Posted to user@hbase.apache.org by Greg Ross <gr...@ngmoco.com> on 2012/10/01 22:35:02 UTC

long garbage collecting pause

Hi,

I'm having difficulty with a MapReduce job whose reducers read from and
write to HBase, version 0.92.1, r1298924. Row sizes vary greatly, as
does the number of cells per row, although the cell count is typically
in the tens at most. The maximum cell size is 1MB.

I see the following in the logs, followed by the region server promptly
shutting down:

2012-10-01 19:08:47,858 [regionserver60020] WARN
org.apache.hadoop.hbase.util.Sleeper: We slept 28970ms instead of
3000ms, this is likely due to a long garbage collecting pause and it's
usually bad, see
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired

The full logs, including GC logs, are below.

Although I'm new to HBase, I've read up on the likely GC issues and
their remedies, and I've implemented the recommended solutions, so far
to no avail.

Here's what I've tried:

(1) increased the RAM to 4G
(2) set -XX:+UseConcMarkSweepGC
(3) set -XX:+UseParNewGC
(4) set -XX:CMSInitiatingOccupancyFraction=N where I've attempted N=[40..70]
(5) I've called context.progress() in the reducer before and after
reading and writing
(6) memstore is enabled
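
For concreteness, settings like (1)-(4) are usually applied through
HBASE_OPTS in conf/hbase-env.sh. The sketch below shows one way that
could look; the specific values (4g heap, N=60, log path) are
illustrative assumptions, not the actual settings from this cluster:

```shell
# conf/hbase-env.sh -- illustrative GC settings, not this cluster's actual values.

# (1) Fix the heap at 4G so the JVM never resizes it mid-run.
export HBASE_OPTS="$HBASE_OPTS -Xms4g -Xmx4g"

# (2) and (3): CMS for the old generation, ParNew for the young generation.
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:+UseParNewGC"

# (4) Start CMS cycles at a fixed occupancy (a value in the tried [40..70] range)
# rather than letting the JVM adapt, to avoid stop-the-world full-GC fallbacks.
export HBASE_OPTS="$HBASE_OPTS -XX:CMSInitiatingOccupancyFraction=60"
export HBASE_OPTS="$HBASE_OPTS -XX:+UseCMSInitiatingOccupancyOnly"

# GC logging, so pauses like the 28s one above show up with timestamps.
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
export HBASE_OPTS="$HBASE_OPTS -Xloggc:$HBASE_HOME/logs/gc-hbase.log"
```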

Is there anything else that I might have missed?

Thanks,

Greg


hbase logs
========

2012-10-01 19:09:48,293
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/.tmp/d2ee47650b224189b0c27d1c20929c03
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
2012-10-01 19:09:48,884
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 5 file(s) in U of
orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.
into d2ee47650b224189b0c27d1c20929c03, size=723.0m; total size for
store is 723.0m
2012-10-01 19:09:48,884
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.,
storeName=U, fileCount=5, fileSize=1.4g, priority=2,
time=10631266687564968; duration=35sec
2012-10-01 19:09:48,886
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
2012-10-01 19:09:48,887
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 5
file(s) in U of
orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/.tmp,
seqid=132201184, totalSize=1.4g
2012-10-01 19:10:04,191
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/.tmp/2e5534fea8b24eaf9cc1e05dea788c01
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
2012-10-01 19:10:04,868
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 5 file(s) in U of
orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
into 2e5534fea8b24eaf9cc1e05dea788c01, size=626.5m; total size for
store is 626.5m
2012-10-01 19:10:04,868
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
storeName=U, fileCount=5, fileSize=1.4g, priority=2,
time=10631266696614208; duration=15sec
2012-10-01 19:14:04,992
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
2012-10-01 19:14:04,993
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/.tmp,
seqid=132198830, totalSize=863.8m
2012-10-01 19:14:19,147
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/.tmp/b741f8501ad248418c48262d751f6e86
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/U/b741f8501ad248418c48262d751f6e86
2012-10-01 19:14:19,381
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
into b741f8501ad248418c48262d751f6e86, size=851.4m; total size for
store is 851.4m
2012-10-01 19:14:19,381
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.,
storeName=U, fileCount=2, fileSize=863.8m, priority=5,
time=10631557965747111; duration=14sec
2012-10-01 19:14:19,381
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
2012-10-01 19:14:19,381
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
file(s) in U of
orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/.tmp,
seqid=132198819, totalSize=496.7m
2012-10-01 19:14:27,337
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/.tmp/78040c736c4149a884a1bdcda9916416
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/U/78040c736c4149a884a1bdcda9916416
2012-10-01 19:14:27,514
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 3 file(s) in U of
orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
into 78040c736c4149a884a1bdcda9916416, size=487.5m; total size for
store is 487.5m
2012-10-01 19:14:27,514
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.,
storeName=U, fileCount=3, fileSize=496.7m, priority=4,
time=10631557966599560; duration=8sec
2012-10-01 19:14:27,514
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
2012-10-01 19:14:27,514
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
file(s) in U of
orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/.tmp,
seqid=132200816, totalSize=521.7m
2012-10-01 19:14:36,962
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/.tmp/0142b8bcdda948c185887358990af6d1
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/U/0142b8bcdda948c185887358990af6d1
2012-10-01 19:14:37,171
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 3 file(s) in U of
orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
into 0142b8bcdda948c185887358990af6d1, size=510.7m; total size for
store is 510.7m
2012-10-01 19:14:37,171
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.,
storeName=U, fileCount=3, fileSize=521.7m, priority=4,
time=10631557967125617; duration=9sec
2012-10-01 19:14:37,172
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
2012-10-01 19:14:37,172
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
file(s) in U of
orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/.tmp,
seqid=132198832, totalSize=565.5m
2012-10-01 19:14:57,082
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/.tmp/44a27dce8df04306908579c22be76786
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/U/44a27dce8df04306908579c22be76786
2012-10-01 19:14:57,429
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 3 file(s) in U of
orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
into 44a27dce8df04306908579c22be76786, size=557.7m; total size for
store is 557.7m
2012-10-01 19:14:57,429
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.,
storeName=U, fileCount=3, fileSize=565.5m, priority=4,
time=10631557967207683; duration=20sec
2012-10-01 19:14:57,429
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
2012-10-01 19:14:57,430
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
file(s) in U of
orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/.tmp,
seqid=132199414, totalSize=845.6m
2012-10-01 19:16:54,394
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/.tmp/771813ba0c87449ebd99d5e7916244f8
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/U/771813ba0c87449ebd99d5e7916244f8
2012-10-01 19:16:54,636
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 3 file(s) in U of
orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
into 771813ba0c87449ebd99d5e7916244f8, size=827.3m; total size for
store is 827.3m
2012-10-01 19:16:54,636
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.,
storeName=U, fileCount=3, fileSize=845.6m, priority=4,
time=10631557967560440; duration=1mins, 57sec
2012-10-01 19:16:54,636
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
2012-10-01 19:16:54,637
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
file(s) in U of
orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/.tmp,
seqid=132198824, totalSize=1012.4m
2012-10-01 19:17:35,610
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/.tmp/771a4124c763468c8dea927cb53887ee
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/U/771a4124c763468c8dea927cb53887ee
2012-10-01 19:17:35,874
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 3 file(s) in U of
orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
into 771a4124c763468c8dea927cb53887ee, size=974.0m; total size for
store is 974.0m
2012-10-01 19:17:35,875
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.,
storeName=U, fileCount=3, fileSize=1012.4m, priority=4,
time=10631557967678796; duration=41sec
2012-10-01 19:17:35,875
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
2012-10-01 19:17:35,875
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
file(s) in U of
orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/.tmp,
seqid=132198815, totalSize=530.5m
2012-10-01 19:17:47,481
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/.tmp/24328f8244f747bf8ba81b74ef2893fa
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/U/24328f8244f747bf8ba81b74ef2893fa
2012-10-01 19:17:47,741
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 3 file(s) in U of
orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
into 24328f8244f747bf8ba81b74ef2893fa, size=524.0m; total size for
store is 524.0m
2012-10-01 19:17:47,741
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.,
storeName=U, fileCount=3, fileSize=530.5m, priority=4,
time=10631557967807915; duration=11sec
2012-10-01 19:17:47,741
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
2012-10-01 19:17:47,741
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
file(s) in U of
orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/.tmp,
seqid=132201190, totalSize=529.3m
2012-10-01 19:17:58,031
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/.tmp/cae48d1b96eb4440a7bcd5fa3b4c070b
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/U/cae48d1b96eb4440a7bcd5fa3b4c070b
2012-10-01 19:17:58,232
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 3 file(s) in U of
orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
into cae48d1b96eb4440a7bcd5fa3b4c070b, size=521.3m; total size for
store is 521.3m
2012-10-01 19:17:58,232
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.,
storeName=U, fileCount=3, fileSize=529.3m, priority=4,
time=10631557967959079; duration=10sec
2012-10-01 19:17:58,232
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
2012-10-01 19:17:58,232
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
file(s) in U of
orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/.tmp,
seqid=132199205, totalSize=475.2m
2012-10-01 19:18:06,764
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/.tmp/ba51afdc860048b6b2e1047b06fb3b29
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/U/ba51afdc860048b6b2e1047b06fb3b29
2012-10-01 19:18:07,065
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 3 file(s) in U of
orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
into ba51afdc860048b6b2e1047b06fb3b29, size=474.5m; total size for
store is 474.5m
2012-10-01 19:18:07,065
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.,
storeName=U, fileCount=3, fileSize=475.2m, priority=4,
time=10631557968104570; duration=8sec
2012-10-01 19:18:07,065
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
2012-10-01 19:18:07,065
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/.tmp,
seqid=132198822, totalSize=522.5m
2012-10-01 19:18:18,306
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/.tmp/7a0bd16b11f34887b2690e9510071bf0
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/U/7a0bd16b11f34887b2690e9510071bf0
2012-10-01 19:18:18,439
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
into 7a0bd16b11f34887b2690e9510071bf0, size=520.0m; total size for
store is 520.0m
2012-10-01 19:18:18,440
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.,
storeName=U, fileCount=2, fileSize=522.5m, priority=5,
time=10631557965863914; duration=11sec
2012-10-01 19:18:18,440
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
2012-10-01 19:18:18,440
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/.tmp,
seqid=132198823, totalSize=548.0m
2012-10-01 19:18:32,288
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/.tmp/dcd050acc2e747738a90aebaae8920e4
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/U/dcd050acc2e747738a90aebaae8920e4
2012-10-01 19:18:32,431
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
into dcd050acc2e747738a90aebaae8920e4, size=528.2m; total size for
store is 528.2m
2012-10-01 19:18:32,431
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.,
storeName=U, fileCount=2, fileSize=548.0m, priority=5,
time=10631557966071838; duration=13sec
2012-10-01 19:18:32,431
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
2012-10-01 19:18:32,431
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/.tmp,
seqid=132199001, totalSize=475.9m
2012-10-01 19:18:43,154
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/.tmp/15a9167cd9754fd4b3674fe732648a03
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/U/15a9167cd9754fd4b3674fe732648a03
2012-10-01 19:18:43,322
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
into 15a9167cd9754fd4b3674fe732648a03, size=475.9m; total size for
store is 475.9m
2012-10-01 19:18:43,322
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.,
storeName=U, fileCount=2, fileSize=475.9m, priority=5,
time=10631557966273447; duration=10sec
2012-10-01 19:18:43,322
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
2012-10-01 19:18:43,322
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/.tmp,
seqid=132198833, totalSize=824.8m
2012-10-01 19:19:00,252
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/.tmp/bf8da91da0824a909f684c3ecd0ee8da
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/U/bf8da91da0824a909f684c3ecd0ee8da
2012-10-01 19:19:00,788
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
into bf8da91da0824a909f684c3ecd0ee8da, size=803.0m; total size for
store is 803.0m
2012-10-01 19:19:00,788
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.,
storeName=U, fileCount=2, fileSize=824.8m, priority=5,
time=10631557966382580; duration=17sec
2012-10-01 19:19:00,788
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
2012-10-01 19:19:00,788
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/.tmp,
seqid=132198810, totalSize=565.3m
2012-10-01 19:19:11,311
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/.tmp/5cd2032f48bc4287b8866165dcb6d3e6
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/U/5cd2032f48bc4287b8866165dcb6d3e6
2012-10-01 19:19:11,504
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
into 5cd2032f48bc4287b8866165dcb6d3e6, size=553.5m; total size for
store is 553.5m
2012-10-01 19:19:11,504
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.,
storeName=U, fileCount=2, fileSize=565.3m, priority=5,
time=10631557966480961; duration=10sec
2012-10-01 19:19:11,504
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
2012-10-01 19:19:11,504
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/.tmp,
seqid=132198825, totalSize=519.6m
2012-10-01 19:19:22,186
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/.tmp/6f29b3b15f1747c196ac9aa79f4835b1
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/U/6f29b3b15f1747c196ac9aa79f4835b1
2012-10-01 19:19:22,437
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
into 6f29b3b15f1747c196ac9aa79f4835b1, size=512.7m; total size for
store is 512.7m
2012-10-01 19:19:22,437
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.,
storeName=U, fileCount=2, fileSize=519.6m, priority=5,
time=10631557966769107; duration=10sec
2012-10-01 19:19:22,437
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
2012-10-01 19:19:22,437
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/.tmp,
seqid=132198836, totalSize=528.3m
2012-10-01 19:19:34,752
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/.tmp/d836630f7e2b4212848d7e4edc7238f1
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/U/d836630f7e2b4212848d7e4edc7238f1
2012-10-01 19:19:34,945
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
into d836630f7e2b4212848d7e4edc7238f1, size=504.3m; total size for
store is 504.3m
2012-10-01 19:19:34,945
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.,
storeName=U, fileCount=2, fileSize=528.3m, priority=5,
time=10631557967026388; duration=12sec
2012-10-01 19:19:34,945
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
2012-10-01 19:19:34,945
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/.tmp,
seqid=132198841, totalSize=813.8m
2012-10-01 19:19:49,303
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/.tmp/c70692c971cd4e899957f9d5b189372e
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/U/c70692c971cd4e899957f9d5b189372e
2012-10-01 19:19:49,428
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
into c70692c971cd4e899957f9d5b189372e, size=813.7m; total size for
store is 813.7m
2012-10-01 19:19:49,428
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.,
storeName=U, fileCount=2, fileSize=813.8m, priority=5,
time=10631557967436197; duration=14sec
2012-10-01 19:19:49,428
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
2012-10-01 19:19:49,429
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/.tmp,
seqid=132198642, totalSize=812.0m
2012-10-01 19:20:38,718
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/.tmp/bf99f97891ed42f7847a11cfb8f46438
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/U/bf99f97891ed42f7847a11cfb8f46438
2012-10-01 19:20:38,825
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
into bf99f97891ed42f7847a11cfb8f46438, size=811.3m; total size for
store is 811.3m
2012-10-01 19:20:38,825
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.,
storeName=U, fileCount=2, fileSize=812.0m, priority=5,
time=10631557968183922; duration=49sec
2012-10-01 19:20:38,826
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
2012-10-01 19:20:38,826
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/.tmp,
seqid=132198138, totalSize=588.7m
2012-10-01 19:20:48,274
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/.tmp/9f44b7eeab58407ca98bb4ec90126035
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/U/9f44b7eeab58407ca98bb4ec90126035
2012-10-01 19:20:48,383
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
into 9f44b7eeab58407ca98bb4ec90126035, size=573.4m; total size for
store is 573.4m
2012-10-01 19:20:48,383
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.,
storeName=U, fileCount=2, fileSize=588.7m, priority=5,
time=10631557968302831; duration=9sec
2012-10-01 19:20:48,383
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
2012-10-01 19:20:48,383
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/.tmp,
seqid=132198644, totalSize=870.8m
2012-10-01 19:21:04,998
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/.tmp/920844c25b1847c6ac4b880e8cf1d5b0
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/U/920844c25b1847c6ac4b880e8cf1d5b0
2012-10-01 19:21:05,107
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
into 920844c25b1847c6ac4b880e8cf1d5b0, size=869.0m; total size for
store is 869.0m
2012-10-01 19:21:05,107
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.,
storeName=U, fileCount=2, fileSize=870.8m, priority=5,
time=10631557968521590; duration=16sec
2012-10-01 19:21:05,107
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
2012-10-01 19:21:05,107
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/.tmp,
seqid=132198622, totalSize=885.3m
2012-10-01 19:21:27,231
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/.tmp/c85d413975d642fc914253bd08f3484f
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/U/c85d413975d642fc914253bd08f3484f
2012-10-01 19:21:27,791
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
into c85d413975d642fc914253bd08f3484f, size=848.3m; total size for
store is 848.3m
2012-10-01 19:21:27,791
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.,
storeName=U, fileCount=2, fileSize=885.3m, priority=5,
time=10631557968628383; duration=22sec
2012-10-01 19:21:27,791
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
in region orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
2012-10-01 19:21:27,791
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
file(s) in U of
orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/.tmp,
seqid=132198621, totalSize=796.5m
2012-10-01 19:21:42,374
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/.tmp/ce543c630dd142309af6dca2a9ab5786
to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/U/ce543c630dd142309af6dca2a9ab5786
2012-10-01 19:21:42,515
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
of 2 file(s) in U of
orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
into ce543c630dd142309af6dca2a9ab5786, size=795.5m; total size for
store is 795.5m
2012-10-01 19:21:42,516
[regionserver60020-largeCompactions-1348577979539] INFO
org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
completed compaction:
regionName=orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.,
storeName=U, fileCount=2, fileSize=796.5m, priority=5,
time=10631557968713853; duration=14sec
2012-10-01 19:49:58,159 [ResponseProcessor for block
blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor
exception  for block
blk_5535637699691880681_51616301java.io.EOFException
    at java.io.DataInputStream.readFully(DataInputStream.java:180)
    at java.io.DataInputStream.readLong(DataInputStream.java:399)
    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:120)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2634)

2012-10-01 19:49:58,167 [IPC Server handler 87 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
{"processingtimems":46208,"client":"10.100.102.155:38534","timeRange":[0,9223372036854775807],"starttimems":1349120951956,"responsesize":329939,"class":"HRegionServer","table":"orwell_events","cacheBlocks":true,"families":{"U":["ALL"]},"row":"00322994","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
2012-10-01 19:49:58,160
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have
not heard from server in 56633ms for sessionid 0x137ec64368509f7,
closing socket connection and attempting reconnect
2012-10-01 19:49:58,160 [regionserver60020] WARN
org.apache.hadoop.hbase.util.Sleeper: We slept 49116ms instead of
3000ms, this is likely due to a long garbage collecting pause and it's
usually bad, see
http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2012-10-01 19:49:58,160
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have
not heard from server in 53359ms for sessionid 0x137ec64368509f6,
closing socket connection and attempting reconnect
2012-10-01 19:49:58,320 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] INFO
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 waiting for responder to exit.
2012-10-01 19:49:58,380 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
2012-10-01 19:49:58,380 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
10.100.102.88:50010, 10.100.102.122:50010: bad datanode
10.100.101.156:50010
2012-10-01 19:49:59,113 [regionserver60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: Unhandled
exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT
rejected; currently processing
data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
org.apache.hadoop.hbase.YouAreDeadException:
org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected;
currently processing
data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:797)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:688)
    at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected;
currently processing
data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
    at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:222)
    at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:148)
    at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:844)
    at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:918)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
    at $Proxy8.regionServerReport(Unknown Source)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:794)
    ... 2 more
2012-10-01 19:49:59,114 [regionserver60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
abort: loaded coprocessors are: []
2012-10-01 19:49:59,397 [IPC Server handler 36 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
{"processingtimems":47521,"client":"10.100.102.176:60221","timeRange":[0,9223372036854775807],"starttimems":1349120951875,"responsesize":699312,"class":"HRegionServer","table":"orwell_events","cacheBlocks":true,"families":{"U":["ALL"]},"row":"00318223","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
2012-10-01 19:50:00,355 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
primary datanode 10.100.102.122:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:00,355
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to
server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181
2012-10-01 19:50:00,356
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
SASL-authenticate because the default JAAS configuration section
'Client' could not be found. If you are not using SASL, you may ignore
this. On the other hand, if you expected SASL to work, please fix your
JAAS configuration.
2012-10-01 19:50:00,356 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.122:50010 failed 1 times.  Pipeline was
10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
retry...
2012-10-01 19:50:00,357
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
session
2012-10-01 19:50:00,358
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
server; r-o mode will be unavailable
2012-10-01 19:50:00,359 [regionserver60020-EventThread] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272:
regionserver:60020-0x137ec64368509f6-0x137ec64368509f6
regionserver:60020-0x137ec64368509f6-0x137ec64368509f6 received
expired from ZooKeeper, aborting
org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:374)
    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:271)
    at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
2012-10-01 19:50:00,359
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper
service, session 0x137ec64368509f6 has expired, closing socket
connection
2012-10-01 19:50:00,359 [regionserver60020-EventThread] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
abort: loaded coprocessors are: []
2012-10-01 19:50:00,367 [regionserver60020-EventThread] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
numberOfStorefiles=189, storefileIndexSizeMB=15,
rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
totalStaticBloomSizeKB=0, memstoreSizeMB=113,
readRequestsCount=6744201, writeRequestsCount=904280,
compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1201,
maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
blockCacheCount=5435, blockCacheHitCount=321294212,
blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
hdfsBlocksLocalityIndex=97
2012-10-01 19:50:00,367 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
numberOfStorefiles=189, storefileIndexSizeMB=15,
rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
totalStaticBloomSizeKB=0, memstoreSizeMB=113,
readRequestsCount=6744201, writeRequestsCount=904280,
compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1201,
maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
blockCacheCount=5435, blockCacheHitCount=321294212,
blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
hdfsBlocksLocalityIndex=97
2012-10-01 19:50:00,381
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to
server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181
2012-10-01 19:50:00,401 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unhandled
exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT
rejected; currently processing
data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
2012-10-01 19:50:00,403
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
SASL-authenticate because the default JAAS configuration section
'Client' could not be found. If you are not using SASL, you may ignore
this. On the other hand, if you expected SASL to work, please fix your
JAAS configuration.
2012-10-01 19:50:00,412 [regionserver60020-EventThread] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED:
regionserver:60020-0x137ec64368509f6-0x137ec64368509f6
regionserver:60020-0x137ec64368509f6-0x137ec64368509f6 received
expired from ZooKeeper, aborting
2012-10-01 19:50:00,412 [regionserver60020-EventThread] INFO
org.apache.zookeeper.ClientCnxn: EventThread shut down
2012-10-01 19:50:00,412 [regionserver60020] INFO
org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
2012-10-01 19:50:00,413
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
session
2012-10-01 19:50:00,413 [IPC Server handler 9 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020:
exiting
2012-10-01 19:50:00,413 [IPC Server handler 20 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 20 on 60020:
exiting
2012-10-01 19:50:00,413 [IPC Server handler 2 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020:
exiting
2012-10-01 19:50:00,413 [IPC Server handler 10 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 10 on 60020:
exiting
2012-10-01 19:50:00,413 [IPC Server listener on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on
60020
2012-10-01 19:50:00,413 [IPC Server handler 12 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 12 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 21 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 21 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 13 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 13 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 19 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 19 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 22 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 22 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 11 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 11 on 60020:
exiting
2012-10-01 19:50:00,414 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.SplitLogWorker: Sending interrupt
to stop the worker thread
2012-10-01 19:50:00,414 [IPC Server handler 6 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020:
exiting
2012-10-01 19:50:00,414 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Stopping
infoServer
2012-10-01 19:50:00,414 [IPC Server handler 0 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 28 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 28 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 7 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020:
exiting
2012-10-01 19:50:00,413 [IPC Server handler 15 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 15 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 5 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 48 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 48 on 60020:
exiting
2012-10-01 19:50:00,413 [IPC Server handler 14 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 14 on 60020:
exiting
2012-10-01 19:50:00,413 [IPC Server handler 18 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 18 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 37 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 37 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 47 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 47 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 50 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 50 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 45 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 45 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 36 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 36 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 43 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 43 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 42 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 42 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 38 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 38 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 8 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 40 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 40 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 34 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 34 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 4 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020:
exiting
2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@5fa9b60a,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320394"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.117:56438: output error
2012-10-01 19:50:00,414 [IPC Server handler 61 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 1768076108943205533:java.io.InterruptedIOException:
Interruped while waiting for IO on channel
java.nio.channels.SocketChannel[connected local=/10.100.101.156:59104
remote=/10.100.101.156:50010]. 59988 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1243)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020
caught: java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020:
exiting
2012-10-01 19:50:00,414 [IPC Server handler 31 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 31 on 60020:
exiting
2012-10-01 19:50:00,414
[SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272]
INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
SplitLogWorker interrupted while waiting for task, exiting:
java.lang.InterruptedException
2012-10-01 19:50:00,563
[SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272]
INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
SplitLogWorker data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
exiting
2012-10-01 19:50:00,414 [IPC Server handler 1 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 3201413024070455305:java.io.InterruptedIOException:
Interruped while waiting for IO on channel
java.nio.channels.SocketChannel[connected local=/10.100.101.156:59115
remote=/10.100.101.156:50010]. 59999 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readShort(DataInputStream.java:295)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,563 [IPC Server Responder] INFO
org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2012-10-01 19:50:00,414 [IPC Server handler 27 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 27 on 60020:
exiting
2012-10-01 19:50:00,414
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
server; r-o mode will be unavailable
2012-10-01 19:50:00,414 [IPC Server handler 55 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block -2144655386884254555:java.io.InterruptedIOException:
Interruped while waiting for IO on channel
java.nio.channels.SocketChannel[connected local=/10.100.101.156:59108
remote=/10.100.101.156:50010]. 60000 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readInt(DataInputStream.java:370)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1350)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,649
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper
service, session 0x137ec64368509f7 has expired, closing socket
connection
2012-10-01 19:50:00,414 [IPC Server handler 39 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.173:50010 for file
/hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
for block -2100467641393578191:java.io.InterruptedIOException:
Interruped while waiting for IO on channel
java.nio.channels.SocketChannel[connected local=/10.100.101.156:48825
remote=/10.100.102.173:50010]. 60000 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,414 [IPC Server handler 26 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block -5183799322211896791:java.io.InterruptedIOException:
Interruped while waiting for IO on channel
java.nio.channels.SocketChannel[connected local=/10.100.101.156:59078
remote=/10.100.101.156:50010]. 59949 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,414 [IPC Server handler 85 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block -5183799322211896791:java.io.InterruptedIOException:
Interruped while waiting for IO on channel
java.nio.channels.SocketChannel[connected local=/10.100.101.156:59082
remote=/10.100.101.156:50010]. 59950 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,414 [IPC Server handler 57 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block -1763662403960466408:java.io.InterruptedIOException:
Interruped while waiting for IO on channel
java.nio.channels.SocketChannel[connected local=/10.100.101.156:59116
remote=/10.100.101.156:50010]. 60000 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readShort(DataInputStream.java:295)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,649 [IPC Server handler 79 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,649 [regionserver60020-EventThread] INFO
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
This client just lost it's session with ZooKeeper, trying to
reconnect.
2012-10-01 19:50:00,649 [IPC Server handler 89 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,649 [PRI IPC Server handler 3 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 3 on 60020:
exiting
2012-10-01 19:50:00,649 [PRI IPC Server handler 0 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020:
exiting
2012-10-01 19:50:00,700 [IPC Server handler 56 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 56 on 60020:
exiting
2012-10-01 19:50:00,649 [PRI IPC Server handler 2 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 2 on 60020:
exiting
2012-10-01 19:50:00,701 [IPC Server handler 54 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 54 on 60020:
exiting
2012-10-01 19:50:00,563 [IPC Server Responder] INFO
org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
2012-10-01 19:50:00,701 [IPC Server handler 71 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 71 on 60020:
exiting
2012-10-01 19:50:00,701 [IPC Server handler 79 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.193:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,563 [IPC Server handler 16 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,701 [PRI IPC Server handler 9 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 9 on 60020:
exiting
2012-10-01 19:50:00,563 [IPC Server handler 17 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,415 [IPC Server handler 60 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@7eee7b96,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321525"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.125:49043: output error
2012-10-01 19:50:00,704 [IPC Server handler 3 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 6550563574061266649:java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,717 [IPC Server handler 49 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 49 on 60020:
exiting
2012-10-01 19:50:00,717 [IPC Server handler 94 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 94 on 60020:
exiting
2012-10-01 19:50:00,701 [IPC Server handler 83 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 83 on 60020:
exiting
2012-10-01 19:50:00,701 [PRI IPC Server handler 1 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 1 on 60020:
exiting
2012-10-01 19:50:00,701 [PRI IPC Server handler 7 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 7 on 60020:
exiting
2012-10-01 19:50:00,701 [IPC Server handler 82 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 82 on 60020:
exiting
2012-10-01 19:50:00,701 [PRI IPC Server handler 6 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 6 on 60020:
exiting
2012-10-01 19:50:00,719 [IPC Server handler 16 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.107:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,701 [IPC Server handler 74 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 74 on 60020:
exiting
2012-10-01 19:50:00,719 [IPC Server handler 86 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 86 on 60020:
exiting
2012-10-01 19:50:00,721 [IPC Server handler 60 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020
caught: java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:00,721 [IPC Server handler 60 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020:
exiting
2012-10-01 19:50:00,701 [PRI IPC Server handler 5 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 5 on 60020:
exiting
2012-10-01 19:50:00,701 [regionserver60020] INFO org.mortbay.log:
Stopped SelectChannelConnector@0.0.0.0:60030
2012-10-01 19:50:00,722 [IPC Server handler 35 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 35 on 60020:
exiting
2012-10-01 19:50:00,722 [IPC Server handler 16 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.133:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,722 [IPC Server handler 98 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 98 on 60020:
exiting
2012-10-01 19:50:00,701 [IPC Server handler 68 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 68 on 60020:
exiting
2012-10-01 19:50:00,701 [IPC Server handler 64 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 64 on 60020:
exiting
2012-10-01 19:50:00,673 [IPC Server handler 33 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 33 on 60020:
exiting
2012-10-01 19:50:00,736 [IPC Server handler 76 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 76 on 60020:
exiting
2012-10-01 19:50:00,673 [regionserver60020-EventThread] INFO
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
Trying to reconnect to zookeeper
2012-10-01 19:50:00,736 [IPC Server handler 84 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 84 on 60020:
exiting
2012-10-01 19:50:00,736 [IPC Server handler 95 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 95 on 60020:
exiting
2012-10-01 19:50:00,736 [IPC Server handler 75 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 75 on 60020:
exiting
2012-10-01 19:50:00,736 [IPC Server handler 92 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 92 on 60020:
exiting
2012-10-01 19:50:00,736 [IPC Server handler 88 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 88 on 60020:
exiting
2012-10-01 19:50:00,736 [IPC Server handler 67 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 67 on 60020:
exiting
2012-10-01 19:50:00,736 [IPC Server handler 30 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 30 on 60020:
exiting
2012-10-01 19:50:00,736 [IPC Server handler 80 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 80 on 60020:
exiting
2012-10-01 19:50:00,736 [IPC Server handler 62 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 62 on 60020:
exiting
2012-10-01 19:50:00,722 [IPC Server handler 52 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 52 on 60020:
exiting
2012-10-01 19:50:00,722 [IPC Server handler 32 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 32 on 60020:
exiting
2012-10-01 19:50:00,722 [IPC Server handler 97 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 97 on 60020:
exiting
2012-10-01 19:50:00,722 [IPC Server handler 96 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 96 on 60020:
exiting
2012-10-01 19:50:00,722 [IPC Server handler 93 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 93 on 60020:
exiting
2012-10-01 19:50:00,722 [IPC Server handler 73 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 73 on 60020:
exiting
2012-10-01 19:50:00,722 [IPC Server handler 17 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.47:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,722 [IPC Server handler 87 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 87 on 60020:
exiting
2012-10-01 19:50:00,721 [IPC Server handler 81 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 81 on 60020:
exiting
2012-10-01 19:50:00,721 [IPC Server handler 90 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,721 [IPC Server handler 59 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
for block -9081461281107361903:java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,719 [IPC Server handler 65 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 65 on 60020:
exiting
2012-10-01 19:50:00,721 [IPC Server handler 29 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 8387547514055202675:java.nio.channels.ClosedChannelException
    at java.nio.channels.spi.AbstractSelectableChannel.configureBlocking(AbstractSelectableChannel.java:252)
    at org.apache.hadoop.net.SocketIOWithTimeout.<init>(SocketIOWithTimeout.java:66)
    at org.apache.hadoop.net.SocketInputStream$Reader.<init>(SocketInputStream.java:50)
    at org.apache.hadoop.net.SocketInputStream.<init>(SocketInputStream.java:73)
    at org.apache.hadoop.net.SocketInputStream.<init>(SocketInputStream.java:91)
    at org.apache.hadoop.net.NetUtils.getInputStream(NetUtils.java:323)
    at org.apache.hadoop.net.NetUtils.getInputStream(NetUtils.java:299)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1474)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,721 [IPC Server handler 66 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 1768076108943205533:java.io.InterruptedIOException:
Interruped while waiting for IO on channel
java.nio.channels.SocketChannel[connected local=/10.100.101.156:59074
remote=/10.100.101.156:50010]. 59947 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,811 [IPC Server handler 59 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.135:50010 for file
/hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
for block -9081461281107361903:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,719 [IPC Server handler 58 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.io.InterruptedIOException:
Interruped while waiting for IO on channel
java.nio.channels.SocketChannel[connected local=/10.100.101.156:59107
remote=/10.100.101.156:50010]. 60000 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,831 [IPC Server handler 29 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.153:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,719 [IPC Server handler 39 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.144:50010 for file
/hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
for block -2100467641393578191:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,719 [IPC Server handler 26 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.138:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,852 [IPC Server handler 66 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.174:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,719 [IPC Server handler 41 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
for block 5946486101046455013:java.io.InterruptedIOException:
Interruped while waiting for IO on channel
java.nio.channels.SocketChannel[connected local=/10.100.101.156:59091
remote=/10.100.101.156:50010]. 59953 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,719 [IPC Server handler 1 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.148:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,719 [IPC Server handler 53 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 53 on 60020:
exiting
2012-10-01 19:50:00,719 [IPC Server handler 79 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.154:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,719 [IPC Server handler 89 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.47:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,719 [IPC Server handler 46 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 4946845190538507957:java.io.InterruptedIOException:
Interruped while waiting for IO on channel
java.nio.channels.SocketChannel[connected local=/10.100.101.156:59113
remote=/10.100.101.156:50010]. 59999 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readShort(DataInputStream.java:295)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,895 [IPC Server handler 26 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.139:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,701 [IPC Server handler 91 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 91 on 60020:
exiting
2012-10-01 19:50:00,717 [IPC Server handler 3 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.114:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 6550563574061266649:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,899 [IPC Server handler 89 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.134:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,717 [PRI IPC Server handler 4 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 4 on 60020:
exiting
2012-10-01 19:50:00,717 [IPC Server handler 77 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 77 on 60020:
exiting
2012-10-01 19:50:00,717 [PRI IPC Server handler 8 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 8 on 60020:
exiting
2012-10-01 19:50:00,717 [IPC Server handler 99 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 99 on 60020:
exiting
2012-10-01 19:50:00,717 [IPC Server handler 85 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.138:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,717 [IPC Server handler 51 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 51 on 60020:
exiting
2012-10-01 19:50:00,717 [IPC Server handler 57 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.138:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,717 [IPC Server handler 55 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.180:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block -2144655386884254555:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,717 [IPC Server handler 70 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 70 on 60020:
exiting
2012-10-01 19:50:00,717 [IPC Server handler 61 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.174:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,904 [IPC Server handler 55 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.173:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block -2144655386884254555:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,705 [IPC Server handler 23 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,705 [IPC Server handler 24 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 2851854722247682142:java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,704 [IPC Server handler 25 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,904 [IPC Server handler 61 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.97:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,904 [IPC Server handler 23 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.144:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,904 [regionserver60020-EventThread] INFO
org.apache.zookeeper.ZooKeeper: Initiating client connection,
connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
sessionTimeout=180000 watcher=hconnection
2012-10-01 19:50:00,905 [IPC Server handler 25 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.72:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,904 [IPC Server handler 55 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_-2144655386884254555_51616216 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,904 [IPC Server handler 57 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.144:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,901 [IPC Server handler 85 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_-5183799322211896791_51616591 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,899 [IPC Server handler 89 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_5937357897784147544_51616546 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,899 [IPC Server handler 3 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_6550563574061266649_51616152 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,896 [IPC Server handler 46 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_4946845190538507957_51616628 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,896 [IPC Server handler 41 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.133:50010 for file
/hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,896 [IPC Server handler 26 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_-5183799322211896791_51616591 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,896 [IPC Server handler 1 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.175:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,895 [IPC Server handler 66 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.97:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,894 [IPC Server handler 39 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.151:50010 for file
/hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
for block -2100467641393578191:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,894 [IPC Server handler 79 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_2209451090614340242_51616188 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,857 [IPC Server handler 29 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.101:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,856 [IPC Server handler 58 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.134:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,839 [IPC Server handler 59 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.194:50010 for file
/hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
for block -9081461281107361903:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,811 [IPC Server handler 16 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_4946845190538507957_51616628 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,787 [IPC Server handler 90 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.134:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,780 [IPC Server handler 17 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.134:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,736 [IPC Server handler 63 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 63 on 60020:
exiting
2012-10-01 19:50:00,736 [IPC Server handler 72 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 72 on 60020:
exiting
2012-10-01 19:50:00,736 [IPC Server handler 78 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 78 on 60020:
exiting
2012-10-01 19:50:00,906 [IPC Server handler 59 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_-9081461281107361903_51616031 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,906 [IPC Server handler 39 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_-2100467641393578191_51531005 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,906 [IPC Server handler 41 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.145:50010 for file
/hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,905 [IPC Server handler 57 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_-1763662403960466408_51616605 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,905 [IPC Server handler 25 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.162:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,904 [IPC Server handler 23 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_-1763662403960466408_51616605 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,904 [IPC Server handler 24 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.72:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:00,904 [IPC Server handler 61 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_1768076108943205533_51616106 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:00,941 [regionserver60020-SendThread()] INFO
org.apache.zookeeper.ClientCnxn: Opening socket connection to server
/10.100.102.197:2181
2012-10-01 19:50:00,941 [regionserver60020-EventThread] INFO
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
of this process is 20776@data3024.ngpipes.milp.ngmoco.com
2012-10-01 19:50:00,942
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
SASL-authenticate because the default JAAS configuration section
'Client' could not be found. If you are not using SASL, you may ignore
this. On the other hand, if you expected SASL to work, please fix your
JAAS configuration.
2012-10-01 19:50:00,943
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
session
2012-10-01 19:50:00,962
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
server; r-o mode will be unavailable
2012-10-01 19:50:00,962
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
sessionid = 0x137ec64373dd4b3, negotiated timeout = 40000
2012-10-01 19:50:00,971 [regionserver60020-EventThread] INFO
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
Reconnected successfully. This disconnect could have been caused by a
network partition or a long-running GC pause, either way it's
recommended that you verify your environment.
2012-10-01 19:50:00,971 [regionserver60020-EventThread] INFO
org.apache.zookeeper.ClientCnxn: EventThread shut down
2012-10-01 19:50:01,018 [IPC Server handler 41 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,018 [IPC Server handler 24 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,019 [IPC Server handler 41 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.133:50010 for file
/hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,019 [IPC Server handler 41 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_5946486101046455013_51616031 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:01,020 [IPC Server handler 25 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.162:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,021 [IPC Server handler 29 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,023 [IPC Server handler 90 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.47:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,023 [IPC Server handler 17 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.47:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,024 [IPC Server handler 66 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.174:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,024 [IPC Server handler 61 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@20c6e4bc,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321393"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.118:57165: output error
2012-10-01 19:50:01,038 [IPC Server handler 61 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:01,038 [IPC Server handler 58 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.134:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,038 [IPC Server handler 61 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020:
exiting
2012-10-01 19:50:01,038 [IPC Server handler 1 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.148:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,039 [IPC Server handler 66 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.97:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,039 [IPC Server handler 29 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.153:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,039 [IPC Server handler 66 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_1768076108943205533_51616106 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:01,039 [IPC Server handler 29 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.102.101:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,041 [IPC Server handler 29 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.156:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,042 [IPC Server handler 29 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.153:50010 for file
/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,044 [IPC Server handler 1 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Failed to connect to
/10.100.101.175:50010 for file
/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)

2012-10-01 19:50:01,090 [IPC Server handler 29 on 60020] ERROR
org.apache.hadoop.hbase.regionserver.HRegionServer:
java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03,
compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
[cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
[cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
[cacheCompressed=false],
firstKey=00321084/U:BAHAMUTIOS_1/1348883706322/Put,
lastKey=00324324/U:user/1348900694793/Put, avgKeyLen=31,
avgValueLen=125185, entries=6053, length=758129544,
cur=00321312/U:KINGDOMSQUESTSIPAD_2/1349024761759/Put/vlen=460950]
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Could not obtain block:
blk_8387547514055202675_51616042
file=/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    ... 17 more
2012-10-01 19:50:01,091 [IPC Server handler 24 on 60020] ERROR
org.apache.hadoop.hbase.regionserver.HRegionServer:
java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
[cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
[cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
[cacheCompressed=false],
firstKey=00316914/U:PETCAT_1/1349118541277/Put,
lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
avgValueLen=89140, entries=7365, length=656954017,
cur=00318964/U:user/1349118541276/Put/vlen=311046]
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Could not obtain block:
blk_2851854722247682142_51616579
file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    ... 14 more
2012-10-01 19:50:01,091 [IPC Server handler 1 on 60020] ERROR
org.apache.hadoop.hbase.regionserver.HRegionServer:
java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
[cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
[cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
[cacheCompressed=false],
firstKey=00316914/U:PETCAT_1/1349118541277/Put,
lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
avgValueLen=89140, entries=7365, length=656954017,
cur=0032027/U:KINGDOMSQUESTS_10/1349118531396/Put/vlen=401149]
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Could not obtain block:
blk_3201413024070455305_51616611
file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    ... 14 more
2012-10-01 19:50:01,091 [IPC Server handler 25 on 60020] ERROR
org.apache.hadoop.hbase.regionserver.HRegionServer:
java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
[cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
[cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
[cacheCompressed=false],
firstKey=00316914/U:PETCAT_1/1349118541277/Put,
lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
avgValueLen=89140, entries=7365, length=656954017,
cur=00319173/U:TINYTOWERANDROID_3/1349024232716/Put/vlen=129419]
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Could not obtain block:
blk_2851854722247682142_51616579
file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    ... 14 more
2012-10-01 19:50:01,091 [IPC Server handler 90 on 60020] ERROR
org.apache.hadoop.hbase.regionserver.HRegionServer:
java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
[cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
[cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
[cacheCompressed=false],
firstKey=00316914/U:PETCAT_1/1349118541277/Put,
lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
avgValueLen=89140, entries=7365, length=656954017,
cur=00316914/U:PETCAT_2/1349118542022/Put/vlen=499140]
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Could not obtain block:
blk_5937357897784147544_51616546
file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    ... 14 more
2012-10-01 19:50:01,091 [IPC Server handler 17 on 60020] ERROR
org.apache.hadoop.hbase.regionserver.HRegionServer:
java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
[cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
[cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
[cacheCompressed=false],
firstKey=00316914/U:PETCAT_1/1349118541277/Put,
lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
avgValueLen=89140, entries=7365, length=656954017,
cur=00317054/U:BAHAMUTIOS_4/1348869430278/Put/vlen=104012]
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Could not obtain block:
blk_5937357897784147544_51616546
file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    ... 17 more
2012-10-01 19:50:01,091 [IPC Server handler 58 on 60020] ERROR
org.apache.hadoop.hbase.regionserver.HRegionServer:
java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
[cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
[cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
[cacheCompressed=false],
firstKey=00316914/U:PETCAT_1/1349118541277/Put,
lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
avgValueLen=89140, entries=7365, length=656954017,
cur=00316983/U:TINYTOWERANDROID_1/1349118439250/Put/vlen=417924]
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Could not obtain block:
blk_5937357897784147544_51616546
file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
    ... 14 more
2012-10-01 19:50:01,091 [IPC Server handler 89 on 60020] ERROR
org.apache.hadoop.hbase.regionserver.HRegionServer:
java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
[cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
[cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
[cacheCompressed=false],
firstKey=00316914/U:PETCAT_1/1349118541277/Put,
lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
avgValueLen=89140, entries=7365, length=656954017,
cur=00317043/U:BAHAMUTANDROID_7/1348968079952/Put/vlen=419212]
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Could not obtain block:
blk_5937357897784147544_51616546
file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
    ... 17 more
2012-10-01 19:50:01,094 [IPC Server handler 58 on 60020] WARN
org.apache.hadoop.ipc.Client: interrupted waiting to send params to
server
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy7.getFileInfo(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy7.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
2012-10-01 19:50:01,094 [IPC Server handler 17 on 60020] WARN
org.apache.hadoop.ipc.Client: interrupted waiting to send params to
server
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy7.getFileInfo(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy7.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
2012-10-01 19:50:01,093 [IPC Server handler 90 on 60020] WARN
org.apache.hadoop.ipc.Client: interrupted waiting to send params to
server
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy7.getFileInfo(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy7.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
2012-10-01 19:50:01,093 [IPC Server handler 25 on 60020] WARN
org.apache.hadoop.ipc.Client: interrupted waiting to send params to
server
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy7.getFileInfo(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy7.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
2012-10-01 19:50:01,092 [IPC Server handler 1 on 60020] WARN
org.apache.hadoop.ipc.Client: interrupted waiting to send params to
server
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy7.getFileInfo(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy7.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
2012-10-01 19:50:01,092 [IPC Server handler 24 on 60020] WARN
org.apache.hadoop.ipc.Client: interrupted waiting to send params to
server
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy7.getFileInfo(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy7.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
2012-10-01 19:50:01,091 [IPC Server handler 29 on 60020] WARN
org.apache.hadoop.ipc.Client: interrupted waiting to send params to
server
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy7.getFileInfo(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy7.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
2012-10-01 19:50:01,095 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
2012-10-01 19:50:01,097 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
10.100.102.88:50010, 10.100.102.122:50010: bad datanode
10.100.101.156:50010
2012-10-01 19:50:01,115 [IPC Server handler 39 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@2743ecf8,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00390925"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.122:51758: output error
2012-10-01 19:50:01,139 [IPC Server handler 39 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:01,139 [IPC Server handler 39 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020:
exiting
2012-10-01 19:50:01,151 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
primary datanode 10.100.102.122:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:01,151 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.122:50010 failed 2 times.  Pipeline was
10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
retry...
2012-10-01 19:50:01,153 [IPC Server handler 89 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@7137feec,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00317043"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.68:55302: output error
2012-10-01 19:50:01,154 [IPC Server handler 89 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:01,154 [IPC Server handler 89 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020:
exiting
2012-10-01 19:50:01,156 [IPC Server handler 66 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@6b9a9eba,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321504"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.176:32793: output error
2012-10-01 19:50:01,157 [IPC Server handler 66 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:01,158 [IPC Server handler 66 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020:
exiting
2012-10-01 19:50:01,159 [IPC Server handler 41 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@586761c,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00391525"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.155:39850: output error
2012-10-01 19:50:01,160 [IPC Server handler 41 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:01,160 [IPC Server handler 41 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020:
exiting
2012-10-01 19:50:01,216 [regionserver60020.compactionChecker] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer$CompactionChecker:
regionserver60020.compactionChecker exiting
2012-10-01 19:50:01,216 [regionserver60020.logRoller] INFO
org.apache.hadoop.hbase.regionserver.LogRoller: LogRoller exiting.
2012-10-01 19:50:01,216 [regionserver60020.cacheFlusher] INFO
org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
regionserver60020.cacheFlusher exiting
2012-10-01 19:50:01,217 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: aborting server
data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
2012-10-01 19:50:01,218 [regionserver60020] INFO
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
Closed zookeeper sessionid=0x137ec64373dd4b3
2012-10-01 19:50:01,270
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,24294294,1349027918385.068e6f4f7b8a81fb21e49fe3ac47f262.
2012-10-01 19:50:01,271
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,96510144,1348960969795.fe2a133a17d09a65a6b0d4fb60e6e051.
2012-10-01 19:50:01,272
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56499174,1349027424070.7f767ca333bef3dcdacc9a6c673a8350.
2012-10-01 19:50:01,273
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,96515494,1348960969795.8ab4e1d9f4e4c388f3f4f18eec637e8a.
2012-10-01 19:50:01,273
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,98395724,1348969940123.08188cc246bf752c17cfe57f99970924.
2012-10-01 19:50:01,274
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
2012-10-01 19:50:01,275
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56604984,1348940650040.14639a082062e98abfea8ae3fff5d2c7.
2012-10-01 19:50:01,275
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56880144,1348969971950.ece85a086a310aacc2da259a3303e67e.
2012-10-01 19:50:01,276
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
2012-10-01 19:50:01,277
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,31267284,1348961229728.fc429276c44f5c274f00168f12128bad.
2012-10-01 19:50:01,278
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56569824,1348940809479.9808dac5b895fc9b8f9892c4b72b3804.
2012-10-01 19:50:01,279
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56425354,1349031095620.e4965f2e57729ff9537986da3e19258c.
2012-10-01 19:50:01,280
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,96504305,1348964001164.77f75cf8ba76ebc4417d49f019317d0a.
2012-10-01 19:50:01,280
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,60743825,1348962513777.f377f704db5f0d000e36003338e017b1.
2012-10-01 19:50:01,283
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,09603014,1349026790546.d634bfe659bdf2f45ec89e53d2d38791.
2012-10-01 19:50:01,283
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,31274021,1348961229728.e93382b458a84c22f2e5aeb9efa737b5.
2012-10-01 19:50:01,285
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56462454,1348982699951.a2dafbd054bf65aa6f558dc9a2d839a1.
2012-10-01 19:50:01,286
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
Orwell,48814673,1348270987327.29818ea19d62126d5616a7ba7d7dae21.
2012-10-01 19:50:01,288
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56610954,1348940650040.3609c1bfc2be6936577b6be493e7e8d9.
2012-10-01 19:50:01,289
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
2012-10-01 19:50:01,289
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,05205763,1348941089603.957ea0e428ba6ff21174ecdda96f9fdc.
2012-10-01 19:50:01,289
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56349615,1348941138879.dfabbd25c59fd6c34a58d9eacf4c096f.
2012-10-01 19:50:01,292
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56503505,1349027424070.129160a78f13c17cc9ea16ff3757cda9.
2012-10-01 19:50:01,292
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,91248264,1348942310344.a93982b8f91f260814885bc0afb4fbb9.
2012-10-01 19:50:01,293
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,98646724,1348980566403.a4f2a16d1278ad1246068646c4886502.
2012-10-01 19:50:01,293
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56454594,1348982903997.7107c6a1b2117fb59f68210ce82f2cc9.
2012-10-01 19:50:01,294
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56564144,1348940809479.636092bb3ec2615b115257080427d091.
2012-10-01 19:50:01,295
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_user_events,06252594,1348582793143.499f0a0f4704afa873c83f141f5e0324.
2012-10-01 19:50:01,296
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56617164,1348941287729.3992a80a6648ab62753b4998331dcfdf.
2012-10-01 19:50:01,296
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,98390944,1348969940123.af160e450632411818fa8d01b2c2ed0b.
2012-10-01 19:50:01,297
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56703743,1348941223663.5cc2fcb82080dbf14956466c31f1d27c.
2012-10-01 19:50:01,297
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
2012-10-01 19:50:01,298
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56693584,1348942631318.f01b179c1fad1f18b97b37fc8f730898.
2012-10-01 19:50:01,299
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_user_events,12140615,1348582250428.7822f7f5ceea852b04b586fdf34debff.
2012-10-01 19:50:01,300
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
2012-10-01 19:50:01,300
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,96420705,1348942597601.a063e06eb840ee49bb88474ee8e22160.
2012-10-01 19:50:01,300
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
2012-10-01 19:50:01,300
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,96432674,1348961425148.1a793cf2137b9599193a1e2d5d9749c5.
2012-10-01 19:50:01,302
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
2012-10-01 19:50:01,303
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,44371574,1348961840615.00f5b4710a43f2ee75d324bebb054323.
2012-10-01 19:50:01,304
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,562fc921,1348941189517.cff261c585416844113f232960c8d6b4.
2012-10-01 19:50:01,304
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56323831,1348941216581.0b0f3bdb03ce9e4f58156a4143018e0e.
2012-10-01 19:50:01,305
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56480194,1349028080664.03a7046ffcec7e1f19cdb2f9890a353e.
2012-10-01 19:50:01,306
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56418294,1348940288044.c872be05981c047e8c1ee4765b92a74d.
2012-10-01 19:50:01,306
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,53590305,1348940776419.4c98d7846622f2d8dad4e998dae81d2b.
2012-10-01 19:50:01,307
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,96445963,1348942353563.66a0f602720191bf21a1dfd12eec4a35.
2012-10-01 19:50:01,307
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
2012-10-01 19:50:01,307
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56305294,1348941189517.20f67941294c259e2273d3e0b7ae5198.
2012-10-01 19:50:01,308
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56516115,1348981132325.0f753cb87c1163d95d9d10077d6308db.
2012-10-01 19:50:01,309
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56796924,1348941269761.843e0aee0b15d67b810c7b3fe5a2dda7.
2012-10-01 19:50:01,309
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56440004,1348941150045.7033cb81a66e405d7bf45cd55ab010e3.
2012-10-01 19:50:01,309
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56317864,1348941124299.0de45283aa626fc83b2c026e1dd8bfec.
2012-10-01 19:50:01,310
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56809673,1348941834500.08244d4ed5f7fdf6d9ac9c73fbfd3947.
2012-10-01 19:50:01,310
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56894864,1348970959541.fc19a6ffe18f29203369d32ad1b102ce.
2012-10-01 19:50:01,311
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56382491,1348940876960.2392137bf0f4cb695c08c0fb22ce5294.
2012-10-01 19:50:01,312
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,95128264,1349026585563.5dc569af8afe0a84006b80612c15007f.
2012-10-01 19:50:01,312
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,5631146,1348941124299.b7c10be9855b5e8ba3a76852920627f9.
2012-10-01 19:50:01,312
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56710424,1348940462668.a370c149c232ebf4427e070eb28079bc.
2012-10-01 19:50:01,314 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Session: 0x137ec64373dd4b3 closed
2012-10-01 19:50:01,314 [regionserver60020-EventThread] INFO
org.apache.zookeeper.ClientCnxn: EventThread shut down
2012-10-01 19:50:01,314 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 78
regions to close
2012-10-01 19:50:01,317
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,96497834,1348964001164.0b12f37b74b2124ef9f27d1ef0ebb17a.
2012-10-01 19:50:01,318
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56507574,1349027965795.79113c51d318a11286b39397ebbfdf04.
2012-10-01 19:50:01,319
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,24297525,1349027918385.047533f3d801709a26c895a01dcc1a73.
2012-10-01 19:50:01,320
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,96439694,1348961425148.038e0e43a6e56760e4daae6f34bfc607.
2012-10-01 19:50:01,320
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,82811715,1348904784424.88fae4279f9806bef745d90f7ad37241.
2012-10-01 19:50:01,321
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56699434,1348941223663.ef3ccf0af60ee87450806b393f89cb6e.
2012-10-01 19:50:01,321
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
2012-10-01 19:50:01,322
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
2012-10-01 19:50:01,322
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
2012-10-01 19:50:01,323
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56465563,1348982699951.f34a29c0c4fc32e753d12db996ccc995.
2012-10-01 19:50:01,324
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56450734,1349027937173.c70110b3573a48299853117c4287c7be.
2012-10-01 19:50:01,325
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56361984,1349029457686.6c8d6974741e59df971da91c7355de1c.
2012-10-01 19:50:01,327
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56814705,1348962077056.69fd74167a3c5c2961e45d339b962ca9.
2012-10-01 19:50:01,327
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,00389105,1348978080963.6463149a16179d4e44c19bb49e4b4a81.
2012-10-01 19:50:01,329
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56558944,1348940893836.03bd1c0532949ec115ca8d5215dbb22f.
2012-10-01 19:50:01,330 [IPC Server handler 59 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@112ba2bf,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00392783"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.101.135:34935: output error
2012-10-01 19:50:01,330
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,5658955,1349027142822.e65d0c1f452cb41d47ad08560c653607.
2012-10-01 19:50:01,331 [IPC Server handler 59 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:01,331
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56402364,1349049689267.27b452f3bcce0815b7bf92370cbb51de.
2012-10-01 19:50:01,331 [IPC Server handler 59 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020:
exiting
2012-10-01 19:50:01,332
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,96426544,1348942597601.addf704f99dd1b2e07b3eff505e2c811.
2012-10-01 19:50:01,333
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,60414161,1348962852909.c6b1b21f00bbeef8648c4b9b3d28b49a.
2012-10-01 19:50:01,333
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56552794,1348940893836.5314886f88f6576e127757faa25cef7c.
2012-10-01 19:50:01,335
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56910924,1348962040261.fdedae86206fc091a72dde52a3d0d0b4.
2012-10-01 19:50:01,335
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56720084,1349029064698.ee5cb00ab358be0d2d36c59189da32f8.
2012-10-01 19:50:01,336
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56624533,1348941287729.6121fce2c31d4754b4ad4e855d85b501.
2012-10-01 19:50:01,336
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56899934,1348970959541.f34f01dd65e293cb6ab13de17ac91eec.
2012-10-01 19:50:01,337
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
2012-10-01 19:50:01,337
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56405923,1349049689267.bb4be5396608abeff803400cdd2408f4.
2012-10-01 19:50:01,338
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56364924,1349029457686.1e1c09b6eb734d8ad48ea0b4fa103381.
2012-10-01 19:50:01,339
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56784073,1348961864297.f01eaf712e59a0bca989ced951caf4f1.
2012-10-01 19:50:01,340
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56594534,1349027142822.8e67bb85f4906d579d4d278d55efce0b.
2012-10-01 19:50:01,340
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
2012-10-01 19:50:01,340
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56491525,1349027928183.7bbfb4d39ef4332e17845001191a6ad4.
2012-10-01 19:50:01,341
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,07123624,1348959804638.c114ec80c6693a284741e220da028736.
2012-10-01 19:50:01,342
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
2012-10-01 19:50:01,342
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56546534,1348941049708.bde2614732f938db04fdd81ed6dbfcf2.
2012-10-01 19:50:01,343
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,569054,1348962040261.a7942d7837cd57b68d156d2ce7e3bd5f.
2012-10-01 19:50:01,343
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56576714,1348982931576.3dd5bf244fb116cf2b6f812fcc39ad2d.
2012-10-01 19:50:01,344
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,5689007,1348963034009.c4b16ea4d8dbc66c301e67d8e58a7e48.
2012-10-01 19:50:01,344
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56410784,1349027912141.6de7be1745c329cf9680ad15e9bde594.
2012-10-01 19:50:01,345
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
2012-10-01 19:50:01,345
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,96457954,1348964300132.674a03f0c9866968aabd70ab38a482c0.
2012-10-01 19:50:01,346
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56483084,1349027988535.de732d7e63ea53331b80255f51fc1a86.
2012-10-01 19:50:01,347
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56790484,1348941269761.5bcc58c48351de449cc17307ab4bf777.
2012-10-01 19:50:01,348
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56458293,1348982903997.4f67e6f4949a2ef7f4903f78f54c474e.
2012-10-01 19:50:01,348
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,95123235,1349026585563.a359eb4cb88d34a529804e50a5affa24.
2012-10-01 19:50:01,349
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
2012-10-01 19:50:01,350
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56368484,1348941099873.cef2729093a0d7d72b71fac1b25c0a40.
2012-10-01 19:50:01,350
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,17499894,1349026916228.630196a553f73069b9e568e6912ef0c5.
2012-10-01 19:50:01,351
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56375315,1348940876960.40cf6dfa370ce7f1fc6c1a59ba2f2191.
2012-10-01 19:50:01,351
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,95512574,1349009451986.e4d292eb66d16c21ef8ae32254334850.
2012-10-01 19:50:01,352
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
2012-10-01 19:50:01,352
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
2012-10-01 19:50:01,353
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56432705,1348941150045.07aa626f3703c7b4deaba1263c71894d.
2012-10-01 19:50:01,353
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,13118725,1349026772953.c0be859d4a4dc2246d764a8aad58fe88.
2012-10-01 19:50:01,354
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56520814,1348981132325.c2f16fd16f83aa51769abedfe8968bb6.
2012-10-01 19:50:01,354
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
2012-10-01 19:50:01,355
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56884434,1348963034009.616835869c81659a27eab896f48ae4e1.
2012-10-01 19:50:01,355
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56476541,1349028080664.341392a325646f24a3d8b8cd27ebda19.
2012-10-01 19:50:01,357
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56803462,1348941834500.6313b36f1949381d01df977a182e6140.
2012-10-01 19:50:01,357
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,96464524,1348964300132.7a15f1e8e28f713212c516777267c2bf.
2012-10-01 19:50:01,358
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56875074,1348969971950.3e408e7cb32c9213d184e10bf42837ad.
2012-10-01 19:50:01,359
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,42862354,1348981565262.7ad46818060be413140cdcc11312119d.
2012-10-01 19:50:01,359
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56582264,1349028973106.b481b61be387a041a3f259069d5013a6.
2012-10-01 19:50:01,360
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56579105,1348982931576.1561a22c16263dccb8be07c654b43f2f.
2012-10-01 19:50:01,360
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56723415,1348946404223.38d992d687ad8925810be4220a732b13.
2012-10-01 19:50:01,361
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,4285921,1348981565262.7a2cbd8452b9e406eaf1a5ebff64855a.
2012-10-01 19:50:01,362
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56336394,1348941231573.ca52393a2eabae00a64f65c0b657b95a.
2012-10-01 19:50:01,363
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,96452715,1348942353563.876edfc6e978879aac42bfc905a09c26.
2012-10-01 19:50:01,363
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
2012-10-01 19:50:01,364
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56525625,1348941298909.ccf16ed8e761765d2989343c7670e94f.
2012-10-01 19:50:01,365
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,97578484,1348938848996.98ecacc61ae4c5b3f7a3de64bec0e026.
2012-10-01 19:50:01,365
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56779025,1348961864297.cc13f0a6f5e632508f2e28a174ef1488.
2012-10-01 19:50:01,366
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
2012-10-01 19:50:01,366
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_user_events,43323443,1348591057882.8b0ab02c33f275114d89088345f58885.
2012-10-01 19:50:01,367
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
2012-10-01 19:50:01,367
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,56686234,1348942631318.69270cd5013f8ca984424e508878e428.
2012-10-01 19:50:01,368
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,98642625,1348980566403.2277d2ef1d53d40d41cd23846619a3f8.
2012-10-01 19:50:01,524 [IPC Server handler 57 on 60020] INFO
org.apache.hadoop.hdfs.DFSClient: Could not obtain block
blk_3201413024070455305_51616611 from any node: java.io.IOException:
No live nodes contain current block. Will get new block locations from
namenode and retry...
2012-10-01 19:50:02,462 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 2
regions to close
2012-10-01 19:50:02,462 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
2012-10-01 19:50:02,462 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
10.100.102.88:50010, 10.100.102.122:50010: bad datanode
10.100.101.156:50010
2012-10-01 19:50:02,495 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
primary datanode 10.100.102.122:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:02,496 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.122:50010 failed 3 times.  Pipeline was
10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
retry...
2012-10-01 19:50:02,686 [IPC Server handler 46 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@504b62c6,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320404"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.101.172:53925: output error
2012-10-01 19:50:02,688 [IPC Server handler 46 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:02,688 [IPC Server handler 46 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020:
exiting
2012-10-01 19:50:02,809 [IPC Server handler 55 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@45f1c31e,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00322424"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.101.178:35016: output error
2012-10-01 19:50:02,810 [IPC Server handler 55 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:02,810 [IPC Server handler 55 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020:
exiting
2012-10-01 19:50:03,496 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
2012-10-01 19:50:03,496 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
10.100.102.88:50010, 10.100.102.122:50010: bad datanode
10.100.101.156:50010
2012-10-01 19:50:03,510 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
primary datanode 10.100.102.122:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:03,510 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.122:50010 failed 4 times.  Pipeline was
10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
retry...
2012-10-01 19:50:05,299 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
2012-10-01 19:50:05,299 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
10.100.102.88:50010, 10.100.102.122:50010: bad datanode
10.100.101.156:50010
2012-10-01 19:50:05,314 [IPC Server handler 3 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@472aa9fe,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321694"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.101.176:42371: output error
2012-10-01 19:50:05,315 [IPC Server handler 3 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:05,315 [IPC Server handler 3 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020:
exiting
2012-10-01 19:50:05,329 [IPC Server handler 16 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@42987a12,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320293"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.101.135:35132: output error
2012-10-01 19:50:05,331 [IPC Server handler 16 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:05,331 [IPC Server handler 16 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020:
exiting
2012-10-01 19:50:05,638 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
primary datanode 10.100.102.122:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:05,638 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.122:50010 failed 5 times.  Pipeline was
10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
retry...
2012-10-01 19:50:05,641 [IPC Server handler 26 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@a9c09e8,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319505"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.101.183:60078: output error
2012-10-01 19:50:05,643 [IPC Server handler 26 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:05,643 [IPC Server handler 26 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020:
exiting
2012-10-01 19:50:05,664 [IPC Server handler 57 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@349d7b4,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319915"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.101.141:58290: output error
2012-10-01 19:50:05,666 [IPC Server handler 57 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:05,666 [IPC Server handler 57 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020:
exiting
2012-10-01 19:50:07,063 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
2012-10-01 19:50:07,063 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
10.100.102.88:50010, 10.100.102.122:50010: bad datanode
10.100.101.156:50010
2012-10-01 19:50:07,076 [IPC Server handler 23 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@5ba03734,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319654"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.161:43227: output error
2012-10-01 19:50:07,077 [IPC Server handler 23 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:07,077 [IPC Server handler 23 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020:
exiting
2012-10-01 19:50:07,089 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
primary datanode 10.100.102.122:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:07,090 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.122:50010 failed 6 times.  Pipeline was
10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010.
Marking primary datanode as bad.
2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@3d19e607,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319564"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.82:42779: output error
2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020:
exiting
2012-10-01 19:50:07,181
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.
2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@5920511b,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00322014"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.88:49489: output error
2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020:
exiting
2012-10-01 19:50:08,064 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 1
regions to close
2012-10-01 19:50:08,159 [regionserver60020.leaseChecker] INFO
org.apache.hadoop.hbase.regionserver.Leases:
regionserver60020.leaseChecker closing leases
2012-10-01 19:50:08,159 [regionserver60020.leaseChecker] INFO
org.apache.hadoop.hbase.regionserver.Leases:
regionserver60020.leaseChecker closed leases
2012-10-01 19:50:08,508 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
primary datanode 10.100.101.156:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:08,508 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.101.156:50010 failed 1 times.  Pipeline was
10.100.101.156:50010, 10.100.102.88:50010. Will retry...
2012-10-01 19:50:09,652 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
primary datanode 10.100.101.156:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:09,653 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.101.156:50010 failed 2 times.  Pipeline was
10.100.101.156:50010, 10.100.102.88:50010. Will retry...
2012-10-01 19:50:10,697 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
primary datanode 10.100.101.156:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:10,697 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.101.156:50010 failed 3 times.  Pipeline was
10.100.101.156:50010, 10.100.102.88:50010. Will retry...
2012-10-01 19:50:12,278 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
primary datanode 10.100.101.156:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:12,279 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.101.156:50010 failed 4 times.  Pipeline was
10.100.101.156:50010, 10.100.102.88:50010. Will retry...
2012-10-01 19:50:13,294 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
primary datanode 10.100.101.156:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:13,294 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.101.156:50010 failed 5 times.  Pipeline was
10.100.101.156:50010, 10.100.102.88:50010. Will retry...
2012-10-01 19:50:14,306 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
primary datanode 10.100.101.156:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:14,306 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.101.156:50010 failed 6 times.  Pipeline was
10.100.101.156:50010, 10.100.102.88:50010. Marking primary datanode as
bad.
2012-10-01 19:50:15,317 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
primary datanode 10.100.102.88:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:15,318 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 1 times.  Pipeline was
10.100.102.88:50010. Will retry...
2012-10-01 19:50:16,375 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
primary datanode 10.100.102.88:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:16,376 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 2 times.  Pipeline was
10.100.102.88:50010. Will retry...
2012-10-01 19:50:17,385 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
primary datanode 10.100.102.88:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:17,385 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 3 times.  Pipeline was
10.100.102.88:50010. Will retry...
2012-10-01 19:50:18,395 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
primary datanode 10.100.102.88:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:18,395 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 4 times.  Pipeline was
10.100.102.88:50010. Will retry...
2012-10-01 19:50:19,404 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
primary datanode 10.100.102.88:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:19,405 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 5 times.  Pipeline was
10.100.102.88:50010. Will retry...
2012-10-01 19:50:20,414 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
primary datanode 10.100.102.88:50010
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
blk_5535637699691880681_51616301 is already commited, storedBlock ==
null.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy4.nextGenerationStamp(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy14.recoverBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:20,415 [DataStreamer for file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
block blk_5535637699691880681_51616301] WARN
org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
2012-10-01 19:50:20,415 [IPC Server handler 58 on 60020] ERROR
org.apache.hadoop.hdfs.DFSClient: Exception closing file
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
: java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:20,415 [IPC Server handler 69 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Error while syncing
java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:20,415 [regionserver60020.logSyncer] WARN
org.apache.hadoop.hdfs.DFSClient: Error while syncing
java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:20,417 [IPC Server handler 69 on 60020] WARN
org.apache.hadoop.hdfs.DFSClient: Error while syncing
java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:20,417 [IPC Server handler 58 on 60020] INFO
org.apache.hadoop.fs.FileSystem: Could not cancel cleanup thread,
though no FileSystems are open
2012-10-01 19:50:20,417 [regionserver60020.logSyncer] WARN
org.apache.hadoop.hdfs.DFSClient: Error while syncing
java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:20,417 [regionserver60020.logSyncer] FATAL
org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
Requesting close of hlog
java.io.IOException: Reflection
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
    at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
    ... 4 more
Caused by: java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:20,418 [regionserver60020.logSyncer] ERROR
org.apache.hadoop.hbase.regionserver.wal.HLog: Error while syncing,
requesting close of hlog
java.io.IOException: Reflection
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
    at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
    ... 4 more
Caused by: java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:20,417 [IPC Server handler 69 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
Requesting close of hlog
java.io.IOException: Reflection
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.append(HLog.java:1033)
    at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:1852)
    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1723)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3076)
    at sun.reflect.GeneratedMethodAccessor96.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
    ... 11 more
Caused by: java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:20,417 [IPC Server handler 29 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
System not available
java.io.IOException: File system is not available
    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: java.lang.InterruptedException
    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy7.getFileInfo(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy7.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
    ... 9 more
Caused by: java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
    ... 21 more
2012-10-01 19:50:20,417 [IPC Server handler 24 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
System not available
java.io.IOException: File system is not available
    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: java.lang.InterruptedException
    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy7.getFileInfo(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy7.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
    ... 9 more
Caused by: java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
    ... 21 more
2012-10-01 19:50:20,420 [IPC Server handler 24 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
abort: loaded coprocessors are: []
2012-10-01 19:50:20,417 [IPC Server handler 1 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
System not available
java.io.IOException: File system is not available
2012-10-01 19:50:20,421 [IPC Server handler 1 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
abort: loaded coprocessors are: []
2012-10-01 19:50:20,417 [IPC Server handler 25 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
System not available
java.io.IOException: File system is not available
2012-10-01 19:50:20,421 [IPC Server handler 25 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
abort: loaded coprocessors are: []
2012-10-01 19:50:20,417 [IPC Server handler 90 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
System not available
java.io.IOException: File system is not available
2012-10-01 19:50:20,422 [IPC Server handler 90 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
abort: loaded coprocessors are: []
2012-10-01 19:50:20,417 [IPC Server handler 58 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
System not available
java.io.IOException: File system is not available
2012-10-01 19:50:20,423 [IPC Server handler 58 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
abort: loaded coprocessors are: []
2012-10-01 19:50:20,417 [IPC Server handler 17 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
System not available
java.io.IOException: File system is not available
2012-10-01 19:50:20,423 [IPC Server handler 17 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
abort: loaded coprocessors are: []
2012-10-01 19:50:20,423 [IPC Server handler 58 on 60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
numberOfStorefiles=189, storefileIndexSizeMB=15,
rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
totalStaticBloomSizeKB=0, memstoreSizeMB=113,
readRequestsCount=6744201, writeRequestsCount=904280,
compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
blockCacheCount=5435, blockCacheHitCount=321294212,
blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
hdfsBlocksLocalityIndex=97
2012-10-01 19:50:20,420 [IPC Server handler 69 on 60020] WARN org.apache.hadoop.ipc.HBaseServer: (responseTooSlow): {"processingtimems":22039,"call":"multi(org.apache.hadoop.hbase.client.MultiAction@207c46fb), rpc version=1, client version=29, methodsFingerPrint=54742778","client":"10.100.102.155:39852","starttimems":1349120998380,"queuetimems":0,"class":"HRegionServer","responsesize":0,"method":"multi"}
2012-10-01 19:50:20,420 [IPC Server handler 24 on 60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
numberOfStorefiles=189, storefileIndexSizeMB=15,
rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
totalStaticBloomSizeKB=0, memstoreSizeMB=113,
readRequestsCount=6744201, writeRequestsCount=904280,
compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1575,
maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
blockCacheCount=5435, blockCacheHitCount=321294212,
blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
hdfsBlocksLocalityIndex=97
2012-10-01 19:50:20,420
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING
region server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272:
Unrecoverable exception while closing region
orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
still finishing close
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
    at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1859)
    at java.io.FilterInputStream.close(FilterInputStream.java:155)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:320)
    at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1081)
    at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:568)
    at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:473)
    at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:789)
    at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:724)
    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:119)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
2012-10-01 19:50:20,426
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
abort: loaded coprocessors are: []
2012-10-01 19:50:20,419 [IPC Server handler 29 on 60020] FATAL
org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
abort: loaded coprocessors are: []
2012-10-01 19:50:20,426
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of
metrics: requestsPerSecond=0, numberOfOnlineRegions=136,
numberOfStores=136, numberOfStorefiles=189, storefileIndexSizeMB=15,
rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
totalStaticBloomSizeKB=0, memstoreSizeMB=113,
readRequestsCount=6744201, writeRequestsCount=904280,
compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1577,
maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
blockCacheCount=5435, blockCacheHitCount=321294212,
blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
hdfsBlocksLocalityIndex=97
2012-10-01 19:50:20,445 [IPC Server handler 58 on 60020] WARN
org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
fatal error to master
java.lang.reflect.UndeclaredThrowableException
    at $Proxy8.reportRSFatalError(Unknown Source)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Call to
namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
local exception: java.nio.channels.ClosedChannelException
    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
    ... 11 more
Caused by: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readInt(DataInputStream.java:370)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
2012-10-01 19:50:20,446 [IPC Server handler 29 on 60020] WARN
org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
fatal error to master
java.lang.reflect.UndeclaredThrowableException
    at $Proxy8.reportRSFatalError(Unknown Source)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Call to
namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
local exception: java.nio.channels.ClosedByInterruptException
    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
    ... 11 more
Caused by: java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:545)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)
    ... 12 more
2012-10-01 19:50:20,447 [IPC Server handler 29 on 60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
System not available
2012-10-01 19:50:20,446 [IPC Server handler 58 on 60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
System not available
2012-10-01 19:50:20,446 [IPC Server handler 17 on 60020] WARN
org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
fatal error to master
java.lang.reflect.UndeclaredThrowableException
    at $Proxy8.reportRSFatalError(Unknown Source)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:328)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:362)
    at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1045)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:897)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
    ... 11 more
2012-10-01 19:50:20,448 [IPC Server handler 17 on 60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
System not available
2012-10-01 19:50:20,445 [IPC Server handler 1 on 60020] WARN
org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
fatal error to master
java.lang.reflect.UndeclaredThrowableException
    at $Proxy8.reportRSFatalError(Unknown Source)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Call to
namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
local exception: java.nio.channels.ClosedChannelException
    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
    ... 11 more
Caused by: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readInt(DataInputStream.java:370)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
2012-10-01 19:50:20,448 [IPC Server handler 1 on 60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
System not available
2012-10-01 19:50:20,445
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to
report fatal error to master
java.lang.reflect.UndeclaredThrowableException
    at $Proxy8.reportRSFatalError(Unknown Source)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:131)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Call to
namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
local exception: java.nio.channels.ClosedByInterruptException
    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
    ... 7 more
Caused by: java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:545)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
    at $Proxy8.reportRSFatalError(Unknown Source)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
2012-10-01 19:50:20,450
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED:
Unrecoverable exception while closing region
orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
still finishing close
2012-10-01 19:50:20,445 [IPC Server handler 69 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
multi(org.apache.hadoop.hbase.client.MultiAction@207c46fb), rpc
version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.155:39852: output error
2012-10-01 19:50:20,445 [IPC Server handler 24 on 60020] WARN
org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
fatal error to master
java.lang.reflect.UndeclaredThrowableException
    at $Proxy8.reportRSFatalError(Unknown Source)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Call to
namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
local exception: java.nio.channels.ClosedChannelException
    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
    ... 11 more
Caused by: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readInt(DataInputStream.java:370)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
2012-10-01 19:50:20,451 [IPC Server handler 24 on 60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
System not available
2012-10-01 19:50:20,445 [IPC Server handler 90 on 60020] WARN
org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
fatal error to master
java.lang.reflect.UndeclaredThrowableException
    at $Proxy8.reportRSFatalError(Unknown Source)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Call to
namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
local exception: java.nio.channels.ClosedChannelException
    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
    ... 11 more
Caused by: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readInt(DataInputStream.java:370)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
2012-10-01 19:50:20,451 [IPC Server handler 90 on 60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
System not available
2012-10-01 19:50:20,445 [IPC Server handler 25 on 60020] WARN
org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
fatal error to master
java.lang.reflect.UndeclaredThrowableException
    at $Proxy8.reportRSFatalError(Unknown Source)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
Caused by: java.io.IOException: Call to
namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
local exception: java.nio.channels.ClosedChannelException
    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
    ... 11 more
Caused by: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readInt(DataInputStream.java:370)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
2012-10-01 19:50:20,452 [IPC Server handler 25 on 60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
System not available
2012-10-01 19:50:20,452 [IPC Server handler 29 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@5d72e577,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321312"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.101.184:34111: output error
2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@2237178f,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00316983"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.101.188:59581: output error
2012-10-01 19:50:20,450 [IPC Server handler 69 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020:
exiting
2012-10-01 19:50:20,450
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable
while processing event M_RS_CLOSE_REGION
java.lang.RuntimeException: java.io.IOException: Filesystem closed
    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:133)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
    at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1859)
    at java.io.FilterInputStream.close(FilterInputStream.java:155)
    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:320)
    at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1081)
    at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:568)
    at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:473)
    at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:789)
    at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:724)
    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:119)
    ... 4 more
2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@573dba6d,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"0032027"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.101.183:60076: output error
2012-10-01 19:50:20,452 [IPC Server handler 69 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020:
exiting
2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@4eebbed5,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00317054"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.146:40240: output error
2012-10-01 19:50:20,452 [IPC Server handler 29 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:20,453 [IPC Server handler 29 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020:
exiting
2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020:
exiting
2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020:
exiting
2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@4ff0ed4a,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00318964"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.101.172:53924: output error
2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020:
exiting
2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@526abe46,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00316914"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.101.184:34110: output error
2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020:
exiting
2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
get([B@5df20fef,
{"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319173"}),
rpc version=1, client version=29, methodsFingerPrint=54742778 from
10.100.102.146:40243: output error
2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] WARN
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020
caught: java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)

2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020:
exiting
2012-10-01 19:50:21,066
[RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
2012-10-01 19:50:21,418 [regionserver60020.logSyncer] WARN
org.apache.hadoop.hdfs.DFSClient: Error while syncing
java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:21,418 [regionserver60020.logSyncer] WARN
org.apache.hadoop.hdfs.DFSClient: Error while syncing
java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:21,418 [regionserver60020.logSyncer] FATAL
org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
Requesting close of hlog
java.io.IOException: Reflection
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
    at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
    ... 4 more
Caused by: java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:21,419 [regionserver60020.logSyncer] ERROR
org.apache.hadoop.hbase.regionserver.wal.HLog: Error while syncing,
requesting close of hlog
java.io.IOException: Reflection
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
    at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
    ... 4 more
Caused by: java.io.IOException: Error Recovery for block
blk_5535637699691880681_51616301 failed  because recovery from primary
datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
10.100.102.88:50010. Aborting...
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
2012-10-01 19:50:22,066 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
data3024.ngpipes.milp.ngmoco.com,60020,1341428844272; all regions
closed.
2012-10-01 19:50:22,066 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closing
leases
2012-10-01 19:50:22,066 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closed
leases
2012-10-01 19:50:22,082 [regionserver60020] WARN
org.apache.hadoop.hbase.regionserver.HRegionServer: Failed deleting my
ephemeral node
org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for
/hbase/rs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:868)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:107)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:962)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:951)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:964)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:762)
    at java.lang.Thread.run(Thread.java:662)
2012-10-01 19:50:22,082 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
data3024.ngpipes.milp.ngmoco.com,60020,1341428844272; zookeeper
connection closed.
2012-10-01 19:50:22,082 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: regionserver60020
exiting
2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
starting; hbase.shutdown.hook=true;
fsShutdownHook=Thread[Thread-5,5,main]
2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown
hook
2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs
shutdown hook thread.
2012-10-01 19:50:22,124 [Shutdownhook:regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
finished.
Mon Oct  1 19:54:10 UTC 2012 Starting regionserver on
data3024.ngpipes.milp.ngmoco.com
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
2012-10-01 19:54:11,355 [main] INFO
org.apache.hadoop.hbase.util.VersionInfo: HBase 0.92.1
2012-10-01 19:54:11,356 [main] INFO
org.apache.hadoop.hbase.util.VersionInfo: Subversion
https://svn.apache.org/repos/asf/hbase/branches/0.92 -r 1298924
2012-10-01 19:54:11,356 [main] INFO
org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Fri
Mar  9 16:58:34 UTC 2012
2012-10-01 19:54:11,513 [main] INFO
org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java
HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc.,
vmVersion=20.1-b02
2012-10-01 19:54:11,513 [main] INFO
org.apache.hadoop.hbase.util.ServerCommandLine:
vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx4000m,
-XX:NewSize=128m, -XX:MaxNewSize=128m,
-XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC,
-XX:CMSInitiatingOccupancyFraction=75, -verbose:gc,
-XX:+PrintGCDetails, -XX:+PrintGCTimeStamps,
-Xloggc:/data2/hbase_log/gc-hbase.log,
-Dcom.sun.management.jmxremote.authenticate=true,
-Dcom.sun.management.jmxremote.ssl=false,
-Dcom.sun.management.jmxremote.password.file=/home/hadoop/hadoop/conf/jmxremote.password,
-Dcom.sun.management.jmxremote.port=8010,
-Dhbase.log.dir=/data2/hbase_log,
-Dhbase.log.file=hbase-hadoop-regionserver-data3024.ngpipes.milp.ngmoco.com.log,
-Dhbase.home.dir=/home/hadoop/hbase, -Dhbase.id.str=hadoop,
-Dhbase.root.logger=INFO,DRFA,
-Djava.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64]
2012-10-01 19:54:11,964 [IPC Reader 0 on port 60020] INFO
org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-10-01 19:54:11,967 [IPC Reader 1 on port 60020] INFO
org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-10-01 19:54:11,970 [IPC Reader 2 on port 60020] INFO
org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-10-01 19:54:11,973 [IPC Reader 3 on port 60020] INFO
org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-10-01 19:54:11,976 [IPC Reader 4 on port 60020] INFO
org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-10-01 19:54:11,979 [IPC Reader 5 on port 60020] INFO
org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-10-01 19:54:11,982 [IPC Reader 6 on port 60020] INFO
org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-10-01 19:54:11,985 [IPC Reader 7 on port 60020] INFO
org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-10-01 19:54:11,988 [IPC Reader 8 on port 60020] INFO
org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-10-01 19:54:11,991 [IPC Reader 9 on port 60020] INFO
org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
2012-10-01 19:54:12,002 [main] INFO
org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics
with hostName=HRegionServer, port=60020
2012-10-01 19:54:12,081 [main] INFO
org.apache.hadoop.hbase.io.hfile.CacheConfig: Allocating LruBlockCache
with maximum size 996.8m
2012-10-01 19:54:12,221 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client
environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48
GMT
2012-10-01 19:54:12,221 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client
environment:host.name=data3024.ngpipes.milp.ngmoco.com
2012-10-01 19:54:12,221 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client
environment:java.version=1.6.0_26
2012-10-01 19:54:12,221 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun
Microsystems Inc.
2012-10-01 19:54:12,221 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client
environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
2012-10-01 19:54:12,221 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client
environment:java.class.path=/home/hadoop/hbase/conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase:/home/hadoop/hbase/hbase-0.92.1.jar:/home/hadoop/hbase/hbase-0.92.1-tests.jar:/home/hadoop/hbase/lib/activation-1.1.jar:/home/hadoop/hbase/lib/asm-3.1.jar:/home/hadoop/hbase/lib/avro-1.5.3.jar:/home/hadoop/hbase/lib/avro-ipc-1.5.3.jar:/home/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hbase/lib/commons-cli-1.2.jar:/home/hadoop/hbase/lib/commons-codec-1.4.jar:/home/hadoop/hbase/lib/commons-collections-3.2.1.jar:/home/hadoop/hbase/lib/commons-configuration-1.6.jar:/home/hadoop/hbase/lib/commons-digester-1.8.jar:/home/hadoop/hbase/lib/commons-el-1.0.jar:/home/hadoop/hbase/lib/commons-httpclient-3.1.jar:/home/hadoop/hbase/lib/commons-lang-2.5.jar:/home/hadoop/hbase/lib/commons-logging-1.1.1.jar:/home/hadoop/hbase/lib/commons-math-2.1.jar:/home/hadoop/hbase/lib/commons-net-1.4.1.jar:/home/hadoop/hbase/lib/core-3.1.1.jar:/home/hadoop/hbase/lib/guava-r09.jar:/home/hadoop/hbase/lib/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hbase/lib/hadoop-lzo-0.4.9.jar:/home/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/home/hadoop/hbase/lib/httpclient-4.0.1.jar:/home/hadoop/hbase/lib/httpcore-4.0.1.jar:/home/hadoop/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/home/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase/lib/jaxb-api-2.1.jar:/home/hadoop/hbase/lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase/lib/jersey-core-1.4.jar:/home/hadoop/hbase/lib/jersey-json-1.4.jar:/home/hadoop/hbase/lib/jersey-server-1.4.jar:/home/hadoop/hbase/lib/jettison-1.1.jar:/home/hadoop/hbase/lib/jetty-6.1.26.jar:/home/hadoop/hbase/lib/jetty-util-6.1.26.jar:/home/hadoop/hbase/lib/jruby-complete-1.6.5.jar:
/home/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase/lib/libthrift-0.7.0.jar:/home/hadoop/hbase/lib/log4j-1.2.16.jar:/home/hadoop/hbase/lib/netty-3.2.4.Final.jar:/home/hadoop/hbase/lib/protobuf-java-2.4.0a.jar:/home/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase/lib/servlet-api-2.5.jar:/home/hadoop/hbase/lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/home/hadoop/hbase/lib/stax-api-1.0.1.jar:/home/hadoop/hbase/lib/velocity-1.7.jar:/home/hadoop/hbase/lib/xmlenc-0.52.jar:/home/hadoop/hbase/lib/zookeeper-3.4.3.jar:
2012-10-01 19:54:12,222 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client
environment:java.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64
2012-10-01 19:54:12,222 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2012-10-01 19:54:12,222 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2012-10-01 19:54:12,222 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
2012-10-01 19:54:12,222 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
2012-10-01 19:54:12,222 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client
environment:os.version=2.6.35-30-generic
2012-10-01 19:54:12,222 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client environment:user.name=hadoop
2012-10-01 19:54:12,222 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client
environment:user.home=/home/hadoop/
2012-10-01 19:54:12,222 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Client
environment:user.dir=/home/gregross
2012-10-01 19:54:12,225 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Initiating client connection,
connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
sessionTimeout=180000 watcher=regionserver:60020
2012-10-01 19:54:12,251 [regionserver60020-SendThread()] INFO
org.apache.zookeeper.ClientCnxn: Opening socket connection to server
/10.100.102.197:2181
2012-10-01 19:54:12,252 [regionserver60020] INFO
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
of this process is 15403@data3024.ngpipes.milp.ngmoco.com
2012-10-01 19:54:12,259
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
SASL-authenticate because the default JAAS configuration section
'Client' could not be found. If you are not using SASL, you may ignore
this. On the other hand, if you expected SASL to work, please fix your
JAAS configuration.
2012-10-01 19:54:12,260
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
session
2012-10-01 19:54:12,272
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
server; r-o mode will be unavailable
2012-10-01 19:54:12,273
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
sessionid = 0x137ec64373dd4b5, negotiated timeout = 40000
2012-10-01 19:54:12,289 [main] INFO
org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown
hook thread: Shutdownhook:regionserver60020
2012-10-01 19:54:12,352 [regionserver60020] INFO
org.apache.zookeeper.ZooKeeper: Initiating client connection,
connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
sessionTimeout=180000 watcher=hconnection
2012-10-01 19:54:12,353 [regionserver60020-SendThread()] INFO
org.apache.zookeeper.ClientCnxn: Opening socket connection to server
/10.100.102.197:2181
2012-10-01 19:54:12,353 [regionserver60020] INFO
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
of this process is 15403@data3024.ngpipes.milp.ngmoco.com
2012-10-01 19:54:12,354
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
SASL-authenticate because the default JAAS configuration section
'Client' could not be found. If you are not using SASL, you may ignore
this. On the other hand, if you expected SASL to work, please fix your
JAAS configuration.
2012-10-01 19:54:12,354
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
session
2012-10-01 19:54:12,361
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
server; r-o mode will be unavailable
2012-10-01 19:54:12,361
[regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
sessionid = 0x137ec64373dd4b6, negotiated timeout = 40000
2012-10-01 19:54:12,384 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
globalMemStoreLimit=1.6g, globalMemStoreLimitLowMark=1.4g,
maxHeap=3.9g
2012-10-01 19:54:12,400 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Runs every 2hrs,
46mins, 40sec
2012-10-01 19:54:12,420 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect
to Master server at
namenode-sn301.ngpipes.milp.ngmoco.com,60000,1348698078915
2012-10-01 19:54:12,453 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to
master at data3024.ngpipes.milp.ngmoco.com/10.100.101.156:60020
2012-10-01 19:54:12,453 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at
namenode-sn301.ngpipes.milp.ngmoco.com,60000,1348698078915 that we are
up with port=60020, startcode=1349121252040
2012-10-01 19:54:12,476 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us
hostname to use. Was=data3024.ngpipes.milp.ngmoco.com,
Now=data3024.ngpipes.milp.ngmoco.com
2012-10-01 19:54:12,568 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.wal.HLog: HLog configuration:
blocksize=64 MB, rollsize=60.8 MB, enabled=true,
optionallogflushinternal=1000ms
2012-10-01 19:54:12,642 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.wal.HLog:  for
/hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1349121252040/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1349121252040.1349121252569
2012-10-01 19:54:12,643 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.wal.HLog: Using
getNumCurrentReplicas--HDFS-826
2012-10-01 19:54:12,651 [regionserver60020] INFO
org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics
with processName=RegionServer, sessionId=regionserver60020
2012-10-01 19:54:12,656 [regionserver60020] INFO
org.apache.hadoop.hbase.metrics: MetricsString added: revision
2012-10-01 19:54:12,656 [regionserver60020] INFO
org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
2012-10-01 19:54:12,656 [regionserver60020] INFO
org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
2012-10-01 19:54:12,656 [regionserver60020] INFO
org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
2012-10-01 19:54:12,656 [regionserver60020] INFO
org.apache.hadoop.hbase.metrics: MetricsString added: date
2012-10-01 19:54:12,656 [regionserver60020] INFO
org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
2012-10-01 19:54:12,656 [regionserver60020] INFO
org.apache.hadoop.hbase.metrics: MetricsString added: user
2012-10-01 19:54:12,656 [regionserver60020] INFO
org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
2012-10-01 19:54:12,656 [regionserver60020] INFO
org.apache.hadoop.hbase.metrics: MetricsString added: url
2012-10-01 19:54:12,656 [regionserver60020] INFO
org.apache.hadoop.hbase.metrics: MetricsString added: version
2012-10-01 19:54:12,656 [regionserver60020] INFO
org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-10-01 19:54:12,657 [regionserver60020] INFO
org.apache.hadoop.hbase.metrics: new MBeanInfo
2012-10-01 19:54:12,657 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics:
Initialized
2012-10-01 19:54:12,722 [regionserver60020] INFO org.mortbay.log:
Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2012-10-01 19:54:12,774 [regionserver60020] INFO
org.apache.hadoop.http.HttpServer: Added global filtersafety
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2012-10-01 19:54:12,787 [regionserver60020] INFO
org.apache.hadoop.http.HttpServer: Port returned by
webServer.getConnectors()[0].getLocalPort() before open() is -1.
Opening the listener on 60030
2012-10-01 19:54:12,787 [regionserver60020] INFO
org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned
60030 webServer.getConnectors()[0].getLocalPort() returned 60030
2012-10-01 19:54:12,787 [regionserver60020] INFO
org.apache.hadoop.http.HttpServer: Jetty bound to port 60030
2012-10-01 19:54:12,787 [regionserver60020] INFO org.mortbay.log: jetty-6.1.26
2012-10-01 19:54:13,079 [regionserver60020] INFO org.mortbay.log:
Started SelectChannelConnector@0.0.0.0:60030
2012-10-01 19:54:13,079 [IPC Server Responder] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
2012-10-01 19:54:13,079 [IPC Server listener on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60020:
starting
2012-10-01 19:54:13,094 [IPC Server handler 0 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020:
starting
2012-10-01 19:54:13,094 [IPC Server handler 1 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020:
starting
2012-10-01 19:54:13,095 [IPC Server handler 2 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020:
starting
2012-10-01 19:54:13,095 [IPC Server handler 3 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020:
starting
2012-10-01 19:54:13,095 [IPC Server handler 4 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020:
starting
2012-10-01 19:54:13,095 [IPC Server handler 5 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020:
starting
2012-10-01 19:54:13,095 [IPC Server handler 6 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020:
starting
2012-10-01 19:54:13,095 [IPC Server handler 7 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020:
starting
2012-10-01 19:54:13,095 [IPC Server handler 8 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60020:
starting
2012-10-01 19:54:13,096 [IPC Server handler 9 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020:
starting
2012-10-01 19:54:13,096 [IPC Server handler 10 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 10 on 60020:
starting
2012-10-01 19:54:13,096 [IPC Server handler 11 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 11 on 60020:
starting
2012-10-01 19:54:13,096 [IPC Server handler 12 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 12 on 60020:
starting
2012-10-01 19:54:13,096 [IPC Server handler 13 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 13 on 60020:
starting
2012-10-01 19:54:13,096 [IPC Server handler 14 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 14 on 60020:
starting
2012-10-01 19:54:13,097 [IPC Server handler 15 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 15 on 60020:
starting
2012-10-01 19:54:13,097 [IPC Server handler 16 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020:
starting
2012-10-01 19:54:13,097 [IPC Server handler 17 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020:
starting
2012-10-01 19:54:13,097 [IPC Server handler 18 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 18 on 60020:
starting
2012-10-01 19:54:13,098 [IPC Server handler 19 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 19 on 60020:
starting
2012-10-01 19:54:13,098 [IPC Server handler 20 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 20 on 60020:
starting
2012-10-01 19:54:13,098 [IPC Server handler 21 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 21 on 60020:
starting
2012-10-01 19:54:13,098 [IPC Server handler 22 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 22 on 60020:
starting
2012-10-01 19:54:13,098 [IPC Server handler 23 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020:
starting
2012-10-01 19:54:13,098 [IPC Server handler 24 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020:
starting
2012-10-01 19:54:13,098 [IPC Server handler 25 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020:
starting
2012-10-01 19:54:13,099 [IPC Server handler 26 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020:
starting
2012-10-01 19:54:13,099 [IPC Server handler 27 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 27 on 60020:
starting
2012-10-01 19:54:13,099 [IPC Server handler 28 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 28 on 60020:
starting
2012-10-01 19:54:13,100 [IPC Server handler 29 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020:
starting
2012-10-01 19:54:13,101 [IPC Server handler 30 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 30 on 60020:
starting
2012-10-01 19:54:13,101 [IPC Server handler 31 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 31 on 60020:
starting
2012-10-01 19:54:13,101 [IPC Server handler 32 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 32 on 60020:
starting
2012-10-01 19:54:13,101 [IPC Server handler 33 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 33 on 60020:
starting
2012-10-01 19:54:13,101 [IPC Server handler 34 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 34 on 60020:
starting
2012-10-01 19:54:13,102 [IPC Server handler 35 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 35 on 60020:
starting
2012-10-01 19:54:13,102 [IPC Server handler 36 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 36 on 60020:
starting
2012-10-01 19:54:13,102 [IPC Server handler 37 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 37 on 60020:
starting
2012-10-01 19:54:13,102 [IPC Server handler 38 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 38 on 60020:
starting
2012-10-01 19:54:13,102 [IPC Server handler 39 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020:
starting
2012-10-01 19:54:13,102 [IPC Server handler 40 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 40 on 60020:
starting
2012-10-01 19:54:13,103 [IPC Server handler 41 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020:
starting
2012-10-01 19:54:13,103 [IPC Server handler 42 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 42 on 60020:
starting
2012-10-01 19:54:13,103 [IPC Server handler 43 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 43 on 60020:
starting
2012-10-01 19:54:13,103 [IPC Server handler 44 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020:
starting
2012-10-01 19:54:13,103 [IPC Server handler 45 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 45 on 60020:
starting
2012-10-01 19:54:13,103 [IPC Server handler 46 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020:
starting
2012-10-01 19:54:13,103 [IPC Server handler 47 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 47 on 60020:
starting
2012-10-01 19:54:13,103 [IPC Server handler 48 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 48 on 60020:
starting
2012-10-01 19:54:13,104 [IPC Server handler 49 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 49 on 60020:
starting
2012-10-01 19:54:13,104 [IPC Server handler 50 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 50 on 60020:
starting
2012-10-01 19:54:13,104 [IPC Server handler 51 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 51 on 60020:
starting
2012-10-01 19:54:13,104 [IPC Server handler 52 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 52 on 60020:
starting
2012-10-01 19:54:13,104 [IPC Server handler 53 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 53 on 60020:
starting
2012-10-01 19:54:13,104 [IPC Server handler 54 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 54 on 60020:
starting
2012-10-01 19:54:13,104 [IPC Server handler 55 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020:
starting
2012-10-01 19:54:13,104 [IPC Server handler 56 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 56 on 60020:
starting
2012-10-01 19:54:13,104 [IPC Server handler 57 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020:
starting
2012-10-01 19:54:13,105 [IPC Server handler 58 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020:
starting
2012-10-01 19:54:13,105 [IPC Server handler 59 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020:
starting
2012-10-01 19:54:13,105 [IPC Server handler 60 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020:
starting
2012-10-01 19:54:13,105 [IPC Server handler 61 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020:
starting
2012-10-01 19:54:13,105 [IPC Server handler 62 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 62 on 60020:
starting
2012-10-01 19:54:13,105 [IPC Server handler 63 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 63 on 60020:
starting
2012-10-01 19:54:13,105 [IPC Server handler 64 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 64 on 60020:
starting
2012-10-01 19:54:13,105 [IPC Server handler 65 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 65 on 60020:
starting
2012-10-01 19:54:13,105 [IPC Server handler 66 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020:
starting
2012-10-01 19:54:13,106 [IPC Server handler 67 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 67 on 60020:
starting
2012-10-01 19:54:13,106 [IPC Server handler 68 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 68 on 60020:
starting
2012-10-01 19:54:13,106 [IPC Server handler 69 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020:
starting
2012-10-01 19:54:13,106 [IPC Server handler 70 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 70 on 60020:
starting
2012-10-01 19:54:13,106 [IPC Server handler 71 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 71 on 60020:
starting
2012-10-01 19:54:13,106 [IPC Server handler 72 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 72 on 60020:
starting
2012-10-01 19:54:13,106 [IPC Server handler 73 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 73 on 60020:
starting
2012-10-01 19:54:13,106 [IPC Server handler 74 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 74 on 60020:
starting
2012-10-01 19:54:13,107 [IPC Server handler 75 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 75 on 60020:
starting
2012-10-01 19:54:13,107 [IPC Server handler 76 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 76 on 60020:
starting
2012-10-01 19:54:13,107 [IPC Server handler 77 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 77 on 60020:
starting
2012-10-01 19:54:13,107 [IPC Server handler 78 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 78 on 60020:
starting
2012-10-01 19:54:13,107 [IPC Server handler 79 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020:
starting
2012-10-01 19:54:13,107 [IPC Server handler 80 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 80 on 60020:
starting
2012-10-01 19:54:13,107 [IPC Server handler 81 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 81 on 60020:
starting
2012-10-01 19:54:13,107 [IPC Server handler 82 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 82 on 60020:
starting
2012-10-01 19:54:13,107 [IPC Server handler 83 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 83 on 60020:
starting
2012-10-01 19:54:13,108 [IPC Server handler 84 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 84 on 60020:
starting
2012-10-01 19:54:13,108 [IPC Server handler 85 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020:
starting
2012-10-01 19:54:13,108 [IPC Server handler 86 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 86 on 60020:
starting
2012-10-01 19:54:13,108 [IPC Server handler 87 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 87 on 60020:
starting
2012-10-01 19:54:13,108 [IPC Server handler 88 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 88 on 60020:
starting
2012-10-01 19:54:13,108 [IPC Server handler 89 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020:
starting
2012-10-01 19:54:13,109 [IPC Server handler 90 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020:
starting
2012-10-01 19:54:13,109 [IPC Server handler 91 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 91 on 60020:
starting
2012-10-01 19:54:13,109 [IPC Server handler 92 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 92 on 60020:
starting
2012-10-01 19:54:13,109 [IPC Server handler 93 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 93 on 60020:
starting
2012-10-01 19:54:13,109 [IPC Server handler 94 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 94 on 60020:
starting
2012-10-01 19:54:13,109 [IPC Server handler 95 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 95 on 60020:
starting
2012-10-01 19:54:13,110 [IPC Server handler 96 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 96 on 60020:
starting
2012-10-01 19:54:13,110 [IPC Server handler 97 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 97 on 60020:
starting
2012-10-01 19:54:13,110 [IPC Server handler 98 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 98 on 60020:
starting
2012-10-01 19:54:13,110 [IPC Server handler 99 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: IPC Server handler 99 on 60020:
starting
2012-10-01 19:54:13,110 [PRI IPC Server handler 0 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020:
starting
2012-10-01 19:54:13,110 [PRI IPC Server handler 1 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 1 on 60020:
starting
2012-10-01 19:54:13,110 [PRI IPC Server handler 2 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 2 on 60020:
starting
2012-10-01 19:54:13,111 [PRI IPC Server handler 3 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 3 on 60020:
starting
2012-10-01 19:54:13,111 [PRI IPC Server handler 4 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 4 on 60020:
starting
2012-10-01 19:54:13,111 [PRI IPC Server handler 5 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 5 on 60020:
starting
2012-10-01 19:54:13,111 [PRI IPC Server handler 6 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 6 on 60020:
starting
2012-10-01 19:54:13,111 [PRI IPC Server handler 7 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 7 on 60020:
starting
2012-10-01 19:54:13,111 [PRI IPC Server handler 8 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 8 on 60020:
starting
2012-10-01 19:54:13,111 [PRI IPC Server handler 9 on 60020] INFO
org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 9 on 60020:
starting
2012-10-01 19:54:13,124 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Serving as
data3024.ngpipes.milp.ngmoco.com,60020,1349121252040, RPC listening on
data3024.ngpipes.milp.ngmoco.com/10.100.101.156:60020,
sessionid=0x137ec64373dd4b5
2012-10-01 19:54:13,124
[SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1349121252040]
INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
SplitLogWorker data3024.ngpipes.milp.ngmoco.com,60020,1349121252040
starting
2012-10-01 19:54:13,125 [regionserver60020] INFO
org.apache.hadoop.hbase.regionserver.HRegionServer: Registered
RegionServer MXBean

GC log
======

1.914: [GC 1.914: [ParNew: 99976K->7646K(118016K), 0.0087130 secs]
99976K->7646K(123328K), 0.0088110 secs] [Times: user=0.07 sys=0.00,
real=0.00 secs]
416.341: [GC 416.341: [ParNew: 112558K->12169K(118016K), 0.0447760
secs] 112558K->25025K(133576K), 0.0450080 secs] [Times: user=0.13
sys=0.02, real=0.05 secs]
416.386: [GC [1 CMS-initial-mark: 12855K(15560K)] 25089K(133576K),
0.0037570 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
416.390: [CMS-concurrent-mark-start]
416.407: [CMS-concurrent-mark: 0.015/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
416.407: [CMS-concurrent-preclean-start]
416.408: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
416.408: [GC[YG occupancy: 12233 K (118016 K)]416.408: [Rescan
(parallel) , 0.0074970 secs]416.416: [weak refs processing, 0.0000370
secs] [1 CMS-remark: 12855K(15560K)] 25089K(133576K), 0.0076480 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
416.416: [CMS-concurrent-sweep-start]
416.419: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
416.419: [CMS-concurrent-reset-start]
416.467: [CMS-concurrent-reset: 0.049/0.049 secs] [Times: user=0.01
sys=0.04, real=0.05 secs]
418.468: [GC [1 CMS-initial-mark: 12855K(21428K)] 26216K(139444K),
0.0037020 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
418.471: [CMS-concurrent-mark-start]
418.487: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
418.487: [CMS-concurrent-preclean-start]
418.488: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
418.488: [GC[YG occupancy: 13360 K (118016 K)]418.488: [Rescan
(parallel) , 0.0090770 secs]418.497: [weak refs processing, 0.0000170
secs] [1 CMS-remark: 12855K(21428K)] 26216K(139444K), 0.0092220 secs]
[Times: user=0.08 sys=0.00, real=0.01 secs]
418.497: [CMS-concurrent-sweep-start]
418.500: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
418.500: [CMS-concurrent-reset-start]
418.511: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
420.512: [GC [1 CMS-initial-mark: 12854K(21428K)] 26344K(139444K),
0.0041050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
420.516: [CMS-concurrent-mark-start]
420.532: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.05
sys=0.01, real=0.01 secs]
420.532: [CMS-concurrent-preclean-start]
420.533: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
420.533: [GC[YG occupancy: 13489 K (118016 K)]420.533: [Rescan
(parallel) , 0.0014850 secs]420.534: [weak refs processing, 0.0000130
secs] [1 CMS-remark: 12854K(21428K)] 26344K(139444K), 0.0015920 secs]
[Times: user=0.01 sys=0.00, real=0.01 secs]
420.534: [CMS-concurrent-sweep-start]
420.537: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
420.537: [CMS-concurrent-reset-start]
420.548: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
422.437: [GC [1 CMS-initial-mark: 12854K(21428K)] 28692K(139444K),
0.0051030 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
422.443: [CMS-concurrent-mark-start]
422.458: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
422.458: [CMS-concurrent-preclean-start]
422.458: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
422.458: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 427.541:
[CMS-concurrent-abortable-preclean: 0.678/5.083 secs] [Times:
user=0.66 sys=0.00, real=5.08 secs]
427.541: [GC[YG occupancy: 16198 K (118016 K)]427.541: [Rescan
(parallel) , 0.0013750 secs]427.543: [weak refs processing, 0.0000140
secs] [1 CMS-remark: 12854K(21428K)] 29053K(139444K), 0.0014800 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
427.543: [CMS-concurrent-sweep-start]
427.544: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
427.544: [CMS-concurrent-reset-start]
427.557: [CMS-concurrent-reset: 0.013/0.013 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
429.557: [GC [1 CMS-initial-mark: 12854K(21428K)] 30590K(139444K),
0.0043280 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
429.562: [CMS-concurrent-mark-start]
429.574: [CMS-concurrent-mark: 0.012/0.012 secs] [Times: user=0.05
sys=0.00, real=0.02 secs]
429.574: [CMS-concurrent-preclean-start]
429.575: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
429.575: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 434.626:
[CMS-concurrent-abortable-preclean: 0.747/5.051 secs] [Times:
user=0.74 sys=0.00, real=5.05 secs]
434.626: [GC[YG occupancy: 18154 K (118016 K)]434.626: [Rescan
(parallel) , 0.0015440 secs]434.627: [weak refs processing, 0.0000140
secs] [1 CMS-remark: 12854K(21428K)] 31009K(139444K), 0.0016500 secs]
[Times: user=0.00 sys=0.00, real=0.00 secs]
434.628: [CMS-concurrent-sweep-start]
434.629: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
434.629: [CMS-concurrent-reset-start]
434.641: [CMS-concurrent-reset: 0.012/0.012 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
436.641: [GC [1 CMS-initial-mark: 12854K(21428K)] 31137K(139444K),
0.0043440 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
436.646: [CMS-concurrent-mark-start]
436.660: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
436.660: [CMS-concurrent-preclean-start]
436.661: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
436.661: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 441.773:
[CMS-concurrent-abortable-preclean: 0.608/5.112 secs] [Times:
user=0.60 sys=0.00, real=5.11 secs]
441.773: [GC[YG occupancy: 18603 K (118016 K)]441.773: [Rescan
(parallel) , 0.0024270 secs]441.776: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12854K(21428K)] 31458K(139444K), 0.0025200 secs]
[Times: user=0.01 sys=0.00, real=0.01 secs]
441.776: [CMS-concurrent-sweep-start]
441.777: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
441.777: [CMS-concurrent-reset-start]
441.788: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
443.788: [GC [1 CMS-initial-mark: 12854K(21428K)] 31586K(139444K),
0.0044590 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
443.793: [CMS-concurrent-mark-start]
443.804: [CMS-concurrent-mark: 0.011/0.011 secs] [Times: user=0.04
sys=0.00, real=0.02 secs]
443.804: [CMS-concurrent-preclean-start]
443.805: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
443.805: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 448.821:
[CMS-concurrent-abortable-preclean: 0.813/5.016 secs] [Times:
user=0.81 sys=0.00, real=5.01 secs]
448.822: [GC[YG occupancy: 19052 K (118016 K)]448.822: [Rescan
(parallel) , 0.0013990 secs]448.823: [weak refs processing, 0.0000140
secs] [1 CMS-remark: 12854K(21428K)] 31907K(139444K), 0.0015040 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
448.823: [CMS-concurrent-sweep-start]
448.825: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
448.825: [CMS-concurrent-reset-start]
448.837: [CMS-concurrent-reset: 0.012/0.012 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
450.837: [GC [1 CMS-initial-mark: 12854K(21428K)] 32035K(139444K),
0.0044510 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
450.842: [CMS-concurrent-mark-start]
450.857: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
450.857: [CMS-concurrent-preclean-start]
450.858: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
450.858: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 455.922:
[CMS-concurrent-abortable-preclean: 0.726/5.064 secs] [Times:
user=0.73 sys=0.00, real=5.06 secs]
455.922: [GC[YG occupancy: 19542 K (118016 K)]455.922: [Rescan
(parallel) , 0.0016050 secs]455.924: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12854K(21428K)] 32397K(139444K), 0.0017340 secs]
[Times: user=0.02 sys=0.00, real=0.01 secs]
455.924: [CMS-concurrent-sweep-start]
455.927: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
455.927: [CMS-concurrent-reset-start]
455.936: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
457.936: [GC [1 CMS-initial-mark: 12854K(21428K)] 32525K(139444K),
0.0026740 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
457.939: [CMS-concurrent-mark-start]
457.950: [CMS-concurrent-mark: 0.011/0.011 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
457.950: [CMS-concurrent-preclean-start]
457.950: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
457.950: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 463.065:
[CMS-concurrent-abortable-preclean: 0.708/5.115 secs] [Times:
user=0.71 sys=0.00, real=5.12 secs]
463.066: [GC[YG occupancy: 19991 K (118016 K)]463.066: [Rescan
(parallel) , 0.0013940 secs]463.067: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12854K(21428K)] 32846K(139444K), 0.0015000 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
463.067: [CMS-concurrent-sweep-start]
463.070: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
463.070: [CMS-concurrent-reset-start]
463.080: [CMS-concurrent-reset: 0.010/0.010 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
465.080: [GC [1 CMS-initial-mark: 12854K(21428K)] 32974K(139444K),
0.0027070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
465.083: [CMS-concurrent-mark-start]
465.096: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
465.096: [CMS-concurrent-preclean-start]
465.096: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
465.096: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 470.123:
[CMS-concurrent-abortable-preclean: 0.723/5.027 secs] [Times:
user=0.71 sys=0.00, real=5.03 secs]
470.124: [GC[YG occupancy: 20440 K (118016 K)]470.124: [Rescan
(parallel) , 0.0011990 secs]470.125: [weak refs processing, 0.0000130
secs] [1 CMS-remark: 12854K(21428K)] 33295K(139444K), 0.0012990 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
470.125: [CMS-concurrent-sweep-start]
470.127: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
470.127: [CMS-concurrent-reset-start]
470.137: [CMS-concurrent-reset: 0.010/0.010 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
472.137: [GC [1 CMS-initial-mark: 12854K(21428K)] 33423K(139444K),
0.0041330 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
472.141: [CMS-concurrent-mark-start]
472.155: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.02 secs]
472.155: [CMS-concurrent-preclean-start]
472.156: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
472.156: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 477.179:
[CMS-concurrent-abortable-preclean: 0.618/5.023 secs] [Times:
user=0.62 sys=0.00, real=5.02 secs]
477.179: [GC[YG occupancy: 20889 K (118016 K)]477.179: [Rescan
(parallel) , 0.0014510 secs]477.180: [weak refs processing, 0.0000090
secs] [1 CMS-remark: 12854K(21428K)] 33744K(139444K), 0.0015250 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
477.181: [CMS-concurrent-sweep-start]
477.183: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
477.183: [CMS-concurrent-reset-start]
477.192: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
479.192: [GC [1 CMS-initial-mark: 12854K(21428K)] 33872K(139444K),
0.0039730 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
479.196: [CMS-concurrent-mark-start]
479.209: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
479.209: [CMS-concurrent-preclean-start]
479.210: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
479.210: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 484.295:
[CMS-concurrent-abortable-preclean: 0.757/5.085 secs] [Times:
user=0.77 sys=0.00, real=5.09 secs]
484.295: [GC[YG occupancy: 21583 K (118016 K)]484.295: [Rescan
(parallel) , 0.0013210 secs]484.297: [weak refs processing, 0.0000150
secs] [1 CMS-remark: 12854K(21428K)] 34438K(139444K), 0.0014200 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
484.297: [CMS-concurrent-sweep-start]
484.298: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
484.298: [CMS-concurrent-reset-start]
484.307: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
486.308: [GC [1 CMS-initial-mark: 12854K(21428K)] 34566K(139444K),
0.0041800 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
486.312: [CMS-concurrent-mark-start]
486.324: [CMS-concurrent-mark: 0.012/0.012 secs] [Times: user=0.05
sys=0.00, real=0.02 secs]
486.324: [CMS-concurrent-preclean-start]
486.324: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
486.324: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 491.394:
[CMS-concurrent-abortable-preclean: 0.565/5.070 secs] [Times:
user=0.56 sys=0.00, real=5.06 secs]
491.394: [GC[YG occupancy: 22032 K (118016 K)]491.395: [Rescan
(parallel) , 0.0018030 secs]491.396: [weak refs processing, 0.0000090
secs] [1 CMS-remark: 12854K(21428K)] 34887K(139444K), 0.0018830 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
491.397: [CMS-concurrent-sweep-start]
491.398: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
491.398: [CMS-concurrent-reset-start]
491.406: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
493.407: [GC [1 CMS-initial-mark: 12854K(21428K)] 35080K(139444K),
0.0027620 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
493.410: [CMS-concurrent-mark-start]
493.420: [CMS-concurrent-mark: 0.010/0.010 secs] [Times: user=0.04
sys=0.00, real=0.01 secs]
493.420: [CMS-concurrent-preclean-start]
493.420: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
493.420: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 498.525:
[CMS-concurrent-abortable-preclean: 0.600/5.106 secs] [Times:
user=0.61 sys=0.00, real=5.11 secs]
498.526: [GC[YG occupancy: 22545 K (118016 K)]498.526: [Rescan
(parallel) , 0.0019450 secs]498.528: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12854K(21428K)] 35400K(139444K), 0.0020460 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
498.528: [CMS-concurrent-sweep-start]
498.530: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
498.530: [CMS-concurrent-reset-start]
498.538: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
500.538: [GC [1 CMS-initial-mark: 12854K(21428K)] 35529K(139444K),
0.0027790 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
500.541: [CMS-concurrent-mark-start]
500.554: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
500.554: [CMS-concurrent-preclean-start]
500.554: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
500.554: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 505.616:
[CMS-concurrent-abortable-preclean: 0.557/5.062 secs] [Times:
user=0.56 sys=0.00, real=5.06 secs]
505.617: [GC[YG occupancy: 22995 K (118016 K)]505.617: [Rescan
(parallel) , 0.0023440 secs]505.619: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12854K(21428K)] 35850K(139444K), 0.0024280 secs]
[Times: user=0.02 sys=0.00, real=0.00 secs]
505.619: [CMS-concurrent-sweep-start]
505.621: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
505.621: [CMS-concurrent-reset-start]
505.629: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
507.630: [GC [1 CMS-initial-mark: 12854K(21428K)] 35978K(139444K),
0.0027500 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
507.632: [CMS-concurrent-mark-start]
507.645: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
sys=0.00, real=0.02 secs]
507.645: [CMS-concurrent-preclean-start]
507.646: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
507.646: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 512.697:
[CMS-concurrent-abortable-preclean: 0.562/5.051 secs] [Times:
user=0.57 sys=0.00, real=5.05 secs]
512.697: [GC[YG occupancy: 23484 K (118016 K)]512.697: [Rescan
(parallel) , 0.0020030 secs]512.699: [weak refs processing, 0.0000090
secs] [1 CMS-remark: 12854K(21428K)] 36339K(139444K), 0.0020830 secs]
[Times: user=0.02 sys=0.00, real=0.00 secs]
512.700: [CMS-concurrent-sweep-start]
512.701: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
512.701: [CMS-concurrent-reset-start]
512.709: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
514.710: [GC [1 CMS-initial-mark: 12854K(21428K)] 36468K(139444K),
0.0028400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
514.713: [CMS-concurrent-mark-start]
514.725: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
sys=0.00, real=0.02 secs]
514.725: [CMS-concurrent-preclean-start]
514.725: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
514.725: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 519.800:
[CMS-concurrent-abortable-preclean: 0.619/5.075 secs] [Times:
user=0.66 sys=0.00, real=5.07 secs]
519.801: [GC[YG occupancy: 25022 K (118016 K)]519.801: [Rescan
(parallel) , 0.0023950 secs]519.803: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12854K(21428K)] 37877K(139444K), 0.0024980 secs]
[Times: user=0.02 sys=0.00, real=0.01 secs]
519.803: [CMS-concurrent-sweep-start]
519.805: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
519.805: [CMS-concurrent-reset-start]
519.813: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
521.814: [GC [1 CMS-initial-mark: 12854K(21428K)] 38005K(139444K),
0.0045520 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
521.818: [CMS-concurrent-mark-start]
521.833: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
521.833: [CMS-concurrent-preclean-start]
521.833: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
521.833: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 526.840:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
526.840: [GC[YG occupancy: 25471 K (118016 K)]526.840: [Rescan
(parallel) , 0.0024440 secs]526.843: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12854K(21428K)] 38326K(139444K), 0.0025440 secs]
[Times: user=0.02 sys=0.00, real=0.00 secs]
526.843: [CMS-concurrent-sweep-start]
526.845: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
526.845: [CMS-concurrent-reset-start]
526.853: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
528.853: [GC [1 CMS-initial-mark: 12849K(21428K)] 38449K(139444K),
0.0045550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
528.858: [CMS-concurrent-mark-start]
528.872: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
528.872: [CMS-concurrent-preclean-start]
528.873: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
528.873: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 533.876:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
533.876: [GC[YG occupancy: 25919 K (118016 K)]533.877: [Rescan
(parallel) , 0.0028370 secs]533.879: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 38769K(139444K), 0.0029390 secs]
[Times: user=0.02 sys=0.00, real=0.00 secs]
533.880: [CMS-concurrent-sweep-start]
533.882: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
533.882: [CMS-concurrent-reset-start]
533.891: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
535.891: [GC [1 CMS-initial-mark: 12849K(21428K)] 38897K(139444K),
0.0046460 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
535.896: [CMS-concurrent-mark-start]
535.910: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
535.910: [CMS-concurrent-preclean-start]
535.911: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
535.911: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 540.917:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
540.917: [GC[YG occupancy: 26367 K (118016 K)]540.917: [Rescan
(parallel) , 0.0025680 secs]540.920: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 39217K(139444K), 0.0026690 secs]
[Times: user=0.03 sys=0.00, real=0.00 secs]
540.920: [CMS-concurrent-sweep-start]
540.922: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
540.922: [CMS-concurrent-reset-start]
540.930: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
542.466: [GC [1 CMS-initial-mark: 12849K(21428K)] 39555K(139444K),
0.0050040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
542.471: [CMS-concurrent-mark-start]
542.486: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.05
sys=0.00, real=0.02 secs]
542.486: [CMS-concurrent-preclean-start]
542.486: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
542.486: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 547.491:
[CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
547.491: [GC[YG occupancy: 27066 K (118016 K)]547.491: [Rescan
(parallel) , 0.0024720 secs]547.494: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 39916K(139444K), 0.0025720 secs]
[Times: user=0.02 sys=0.00, real=0.01 secs]
547.494: [CMS-concurrent-sweep-start]
547.496: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
547.496: [CMS-concurrent-reset-start]
547.505: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
549.506: [GC [1 CMS-initial-mark: 12849K(21428K)] 40044K(139444K),
0.0048760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
549.511: [CMS-concurrent-mark-start]
549.524: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
549.524: [CMS-concurrent-preclean-start]
549.525: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
549.525: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 554.530:
[CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
554.530: [GC[YG occupancy: 27515 K (118016 K)]554.530: [Rescan
(parallel) , 0.0025270 secs]554.533: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 40364K(139444K), 0.0026190 secs]
[Times: user=0.03 sys=0.00, real=0.00 secs]
554.533: [CMS-concurrent-sweep-start]
554.534: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
554.534: [CMS-concurrent-reset-start]
554.542: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
556.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 40493K(139444K),
0.0048950 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
556.548: [CMS-concurrent-mark-start]
556.562: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
556.562: [CMS-concurrent-preclean-start]
556.562: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
556.563: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 561.565:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
561.566: [GC[YG occupancy: 27963 K (118016 K)]561.566: [Rescan
(parallel) , 0.0025900 secs]561.568: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 40813K(139444K), 0.0026910 secs]
[Times: user=0.02 sys=0.00, real=0.00 secs]
561.569: [CMS-concurrent-sweep-start]
561.570: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
561.570: [CMS-concurrent-reset-start]
561.578: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
563.579: [GC [1 CMS-initial-mark: 12849K(21428K)] 40941K(139444K),
0.0049390 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
563.584: [CMS-concurrent-mark-start]
563.598: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
563.598: [CMS-concurrent-preclean-start]
563.598: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
563.598: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 568.693:
[CMS-concurrent-abortable-preclean: 0.717/5.095 secs] [Times:
user=0.71 sys=0.00, real=5.09 secs]
568.694: [GC[YG occupancy: 28411 K (118016 K)]568.694: [Rescan
(parallel) , 0.0035750 secs]568.697: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 41261K(139444K), 0.0036740 secs]
[Times: user=0.03 sys=0.00, real=0.00 secs]
568.698: [CMS-concurrent-sweep-start]
568.700: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
568.700: [CMS-concurrent-reset-start]
568.709: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
570.709: [GC [1 CMS-initial-mark: 12849K(21428K)] 41389K(139444K),
0.0048710 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
570.714: [CMS-concurrent-mark-start]
570.729: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
570.729: [CMS-concurrent-preclean-start]
570.729: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
570.729: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 575.738:
[CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
575.738: [GC[YG occupancy: 28900 K (118016 K)]575.738: [Rescan
(parallel) , 0.0036390 secs]575.742: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 41750K(139444K), 0.0037440 secs]
[Times: user=0.03 sys=0.00, real=0.00 secs]
575.742: [CMS-concurrent-sweep-start]
575.744: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
575.744: [CMS-concurrent-reset-start]
575.752: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
577.752: [GC [1 CMS-initial-mark: 12849K(21428K)] 41878K(139444K),
0.0050100 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
577.758: [CMS-concurrent-mark-start]
577.772: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
577.772: [CMS-concurrent-preclean-start]
577.773: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
577.773: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 582.779:
[CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
582.779: [GC[YG occupancy: 29348 K (118016 K)]582.779: [Rescan
(parallel) , 0.0026100 secs]582.782: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 42198K(139444K), 0.0027110 secs]
[Times: user=0.02 sys=0.00, real=0.00 secs]
582.782: [CMS-concurrent-sweep-start]
582.784: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
582.784: [CMS-concurrent-reset-start]
582.792: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
584.792: [GC [1 CMS-initial-mark: 12849K(21428K)] 42326K(139444K),
0.0050510 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
584.798: [CMS-concurrent-mark-start]
584.812: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
584.812: [CMS-concurrent-preclean-start]
584.813: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
584.813: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 589.819:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
589.819: [GC[YG occupancy: 29797 K (118016 K)]589.819: [Rescan
(parallel) , 0.0039510 secs]589.823: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 42647K(139444K), 0.0040460 secs]
[Times: user=0.03 sys=0.00, real=0.01 secs]
589.824: [CMS-concurrent-sweep-start]
589.826: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
589.826: [CMS-concurrent-reset-start]
589.835: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
591.835: [GC [1 CMS-initial-mark: 12849K(21428K)] 42775K(139444K),
0.0050090 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
591.840: [CMS-concurrent-mark-start]
591.855: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
591.855: [CMS-concurrent-preclean-start]
591.855: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
591.855: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 596.857:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
596.857: [GC[YG occupancy: 31414 K (118016 K)]596.857: [Rescan
(parallel) , 0.0028500 secs]596.860: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 44264K(139444K), 0.0029480 secs]
[Times: user=0.02 sys=0.00, real=0.00 secs]
596.861: [CMS-concurrent-sweep-start]
596.862: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
596.862: [CMS-concurrent-reset-start]
596.870: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
598.871: [GC [1 CMS-initial-mark: 12849K(21428K)] 44392K(139444K),
0.0050640 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
598.876: [CMS-concurrent-mark-start]
598.890: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
598.890: [CMS-concurrent-preclean-start]
598.891: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
598.891: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 603.897:
[CMS-concurrent-abortable-preclean: 0.705/5.007 secs] [Times:
user=0.72 sys=0.00, real=5.01 secs]
603.898: [GC[YG occupancy: 32032 K (118016 K)]603.898: [Rescan
(parallel) , 0.0039660 secs]603.902: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 44882K(139444K), 0.0040680 secs]
[Times: user=0.04 sys=0.00, real=0.00 secs]
603.902: [CMS-concurrent-sweep-start]
603.903: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
603.903: [CMS-concurrent-reset-start]
603.912: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
605.912: [GC [1 CMS-initial-mark: 12849K(21428K)] 45010K(139444K),
0.0053650 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
605.918: [CMS-concurrent-mark-start]
605.932: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
605.932: [CMS-concurrent-preclean-start]
605.932: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
605.932: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 610.939:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
610.940: [GC[YG occupancy: 32481 K (118016 K)]610.940: [Rescan
(parallel) , 0.0032540 secs]610.943: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 45330K(139444K), 0.0033560 secs]
[Times: user=0.02 sys=0.00, real=0.00 secs]
610.943: [CMS-concurrent-sweep-start]
610.944: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
610.945: [CMS-concurrent-reset-start]
610.953: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
612.486: [GC [1 CMS-initial-mark: 12849K(21428K)] 45459K(139444K),
0.0055070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
612.492: [CMS-concurrent-mark-start]
612.505: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
612.505: [CMS-concurrent-preclean-start]
612.506: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
612.506: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 617.511:
[CMS-concurrent-abortable-preclean: 0.700/5.006 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
617.512: [GC[YG occupancy: 32929 K (118016 K)]617.512: [Rescan
(parallel) , 0.0037500 secs]617.516: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 45779K(139444K), 0.0038560 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
617.516: [CMS-concurrent-sweep-start]
617.518: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
617.518: [CMS-concurrent-reset-start]
617.527: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
619.528: [GC [1 CMS-initial-mark: 12849K(21428K)] 45907K(139444K),
0.0053320 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
619.533: [CMS-concurrent-mark-start]
619.546: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
sys=0.00, real=0.02 secs]
619.546: [CMS-concurrent-preclean-start]
619.547: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
619.547: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 624.552:
[CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
624.552: [GC[YG occupancy: 33377 K (118016 K)]624.552: [Rescan
(parallel) , 0.0037290 secs]624.556: [weak refs processing, 0.0000130
secs] [1 CMS-remark: 12849K(21428K)] 46227K(139444K), 0.0038330 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
624.556: [CMS-concurrent-sweep-start]
624.558: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
624.558: [CMS-concurrent-reset-start]
624.568: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
626.568: [GC [1 CMS-initial-mark: 12849K(21428K)] 46355K(139444K),
0.0054240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
626.574: [CMS-concurrent-mark-start]
626.588: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
626.588: [CMS-concurrent-preclean-start]
626.588: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
626.588: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 631.592:
[CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
631.592: [GC[YG occupancy: 33825 K (118016 K)]631.593: [Rescan
(parallel) , 0.0041600 secs]631.597: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 46675K(139444K), 0.0042650 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
631.597: [CMS-concurrent-sweep-start]
631.598: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
631.598: [CMS-concurrent-reset-start]
631.607: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
632.495: [GC [1 CMS-initial-mark: 12849K(21428K)] 46839K(139444K),
0.0054380 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
632.501: [CMS-concurrent-mark-start]
632.516: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
632.516: [CMS-concurrent-preclean-start]
632.517: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
632.517: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 637.519:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
637.519: [GC[YG occupancy: 34350 K (118016 K)]637.519: [Rescan
(parallel) , 0.0025310 secs]637.522: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 47200K(139444K), 0.0026540 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
637.522: [CMS-concurrent-sweep-start]
637.523: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
637.523: [CMS-concurrent-reset-start]
637.532: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
639.532: [GC [1 CMS-initial-mark: 12849K(21428K)] 47328K(139444K),
0.0055330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
639.538: [CMS-concurrent-mark-start]
639.551: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
639.551: [CMS-concurrent-preclean-start]
639.552: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
639.552: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 644.561:
[CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
644.561: [GC[YG occupancy: 34798 K (118016 K)]644.561: [Rescan
(parallel) , 0.0040620 secs]644.565: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 47648K(139444K), 0.0041610 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
644.566: [CMS-concurrent-sweep-start]
644.568: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
644.568: [CMS-concurrent-reset-start]
644.577: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
646.577: [GC [1 CMS-initial-mark: 12849K(21428K)] 47776K(139444K),
0.0054660 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
646.583: [CMS-concurrent-mark-start]
646.596: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
646.596: [CMS-concurrent-preclean-start]
646.597: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
646.597: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 651.678:
[CMS-concurrent-abortable-preclean: 0.732/5.081 secs] [Times:
user=0.74 sys=0.00, real=5.08 secs]
651.678: [GC[YG occupancy: 35246 K (118016 K)]651.678: [Rescan
(parallel) , 0.0025920 secs]651.681: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 48096K(139444K), 0.0026910 secs]
[Times: user=0.02 sys=0.00, real=0.00 secs]
651.681: [CMS-concurrent-sweep-start]
651.682: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
651.682: [CMS-concurrent-reset-start]
651.690: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
653.691: [GC [1 CMS-initial-mark: 12849K(21428K)] 48224K(139444K),
0.0055640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
653.696: [CMS-concurrent-mark-start]
653.711: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
653.711: [CMS-concurrent-preclean-start]
653.711: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
653.711: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 658.721:
[CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
658.721: [GC[YG occupancy: 35695 K (118016 K)]658.721: [Rescan
(parallel) , 0.0040160 secs]658.725: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 48545K(139444K), 0.0041130 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
658.725: [CMS-concurrent-sweep-start]
658.727: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
658.728: [CMS-concurrent-reset-start]
658.737: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
660.737: [GC [1 CMS-initial-mark: 12849K(21428K)] 48673K(139444K),
0.0055230 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
660.743: [CMS-concurrent-mark-start]
660.756: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
660.756: [CMS-concurrent-preclean-start]
660.757: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
660.757: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 665.767:
[CMS-concurrent-abortable-preclean: 0.704/5.011 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
665.768: [GC[YG occupancy: 36289 K (118016 K)]665.768: [Rescan
(parallel) , 0.0033040 secs]665.771: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 49139K(139444K), 0.0034090 secs]
[Times: user=0.02 sys=0.00, real=0.00 secs]
665.771: [CMS-concurrent-sweep-start]
665.773: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
665.773: [CMS-concurrent-reset-start]
665.781: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
667.781: [GC [1 CMS-initial-mark: 12849K(21428K)] 49267K(139444K),
0.0057830 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
667.787: [CMS-concurrent-mark-start]
667.802: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
667.802: [CMS-concurrent-preclean-start]
667.802: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
667.802: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 672.809:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
672.810: [GC[YG occupancy: 36737 K (118016 K)]672.810: [Rescan
(parallel) , 0.0037010 secs]672.813: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 49587K(139444K), 0.0038010 secs]
[Times: user=0.03 sys=0.00, real=0.01 secs]
672.814: [CMS-concurrent-sweep-start]
672.815: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
672.815: [CMS-concurrent-reset-start]
672.824: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
674.824: [GC [1 CMS-initial-mark: 12849K(21428K)] 49715K(139444K),
0.0058920 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
674.830: [CMS-concurrent-mark-start]
674.845: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
674.845: [CMS-concurrent-preclean-start]
674.845: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
674.845: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 679.849:
[CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
679.850: [GC[YG occupancy: 37185 K (118016 K)]679.850: [Rescan
(parallel) , 0.0033420 secs]679.853: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 50035K(139444K), 0.0034440 secs]
[Times: user=0.02 sys=0.00, real=0.01 secs]
679.853: [CMS-concurrent-sweep-start]
679.855: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
679.855: [CMS-concurrent-reset-start]
679.863: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
681.864: [GC [1 CMS-initial-mark: 12849K(21428K)] 50163K(139444K),
0.0058780 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
681.870: [CMS-concurrent-mark-start]
681.884: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
681.884: [CMS-concurrent-preclean-start]
681.884: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
681.884: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 686.890:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
686.891: [GC[YG occupancy: 37634 K (118016 K)]686.891: [Rescan
(parallel) , 0.0044480 secs]686.895: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 50483K(139444K), 0.0045570 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
686.896: [CMS-concurrent-sweep-start]
686.897: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
686.897: [CMS-concurrent-reset-start]
686.905: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
688.905: [GC [1 CMS-initial-mark: 12849K(21428K)] 50612K(139444K),
0.0058940 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
688.911: [CMS-concurrent-mark-start]
688.925: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
688.925: [CMS-concurrent-preclean-start]
688.925: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
688.926: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 694.041:
[CMS-concurrent-abortable-preclean: 0.718/5.115 secs] [Times:
user=0.72 sys=0.00, real=5.11 secs]
694.041: [GC[YG occupancy: 38122 K (118016 K)]694.041: [Rescan
(parallel) , 0.0028640 secs]694.044: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 50972K(139444K), 0.0029660 secs]
[Times: user=0.03 sys=0.00, real=0.01 secs]
694.044: [CMS-concurrent-sweep-start]
694.046: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
694.046: [CMS-concurrent-reset-start]
694.054: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
696.054: [GC [1 CMS-initial-mark: 12849K(21428K)] 51100K(139444K),
0.0060550 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
696.060: [CMS-concurrent-mark-start]
696.074: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
696.074: [CMS-concurrent-preclean-start]
696.075: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
696.075: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 701.078:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
701.079: [GC[YG occupancy: 38571 K (118016 K)]701.079: [Rescan
(parallel) , 0.0064210 secs]701.085: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 51421K(139444K), 0.0065220 secs]
[Times: user=0.07 sys=0.00, real=0.01 secs]
701.085: [CMS-concurrent-sweep-start]
701.087: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
701.088: [CMS-concurrent-reset-start]
701.097: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
703.097: [GC [1 CMS-initial-mark: 12849K(21428K)] 51549K(139444K),
0.0058470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
703.103: [CMS-concurrent-mark-start]
703.116: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
sys=0.00, real=0.02 secs]
703.116: [CMS-concurrent-preclean-start]
703.117: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
703.117: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 708.125:
[CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
708.125: [GC[YG occupancy: 39054 K (118016 K)]708.125: [Rescan
(parallel) , 0.0037190 secs]708.129: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 51904K(139444K), 0.0038220 secs]
[Times: user=0.03 sys=0.00, real=0.00 secs]
708.129: [CMS-concurrent-sweep-start]
708.131: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
708.131: [CMS-concurrent-reset-start]
708.139: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
710.139: [GC [1 CMS-initial-mark: 12849K(21428K)] 52032K(139444K),
0.0059770 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
710.145: [CMS-concurrent-mark-start]
710.158: [CMS-concurrent-mark: 0.012/0.012 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
710.158: [CMS-concurrent-preclean-start]
710.158: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
710.158: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 715.169:
[CMS-concurrent-abortable-preclean: 0.705/5.011 secs] [Times:
user=0.69 sys=0.01, real=5.01 secs]
715.169: [GC[YG occupancy: 39503 K (118016 K)]715.169: [Rescan
(parallel) , 0.0042370 secs]715.173: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 52353K(139444K), 0.0043410 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
715.174: [CMS-concurrent-sweep-start]
715.176: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
715.176: [CMS-concurrent-reset-start]
715.185: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
717.185: [GC [1 CMS-initial-mark: 12849K(21428K)] 52481K(139444K),
0.0060050 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
717.191: [CMS-concurrent-mark-start]
717.205: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
717.205: [CMS-concurrent-preclean-start]
717.206: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
717.206: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 722.209:
[CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
user=0.71 sys=0.00, real=5.00 secs]
722.210: [GC[YG occupancy: 40161 K (118016 K)]722.210: [Rescan
(parallel) , 0.0041630 secs]722.214: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 53011K(139444K), 0.0042630 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
722.214: [CMS-concurrent-sweep-start]
722.216: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
722.216: [CMS-concurrent-reset-start]
722.226: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
722.521: [GC [1 CMS-initial-mark: 12849K(21428K)] 53099K(139444K),
0.0062380 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
722.528: [CMS-concurrent-mark-start]
722.544: [CMS-concurrent-mark: 0.015/0.016 secs] [Times: user=0.05
sys=0.01, real=0.02 secs]
722.544: [CMS-concurrent-preclean-start]
722.544: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
722.544: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 727.558:
[CMS-concurrent-abortable-preclean: 0.709/5.014 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
727.558: [GC[YG occupancy: 40610 K (118016 K)]727.558: [Rescan
(parallel) , 0.0041700 secs]727.563: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 53460K(139444K), 0.0042780 secs]
[Times: user=0.05 sys=0.00, real=0.00 secs]
727.563: [CMS-concurrent-sweep-start]
727.564: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
727.564: [CMS-concurrent-reset-start]
727.573: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.02 secs]
729.574: [GC [1 CMS-initial-mark: 12849K(21428K)] 53588K(139444K),
0.0062700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
729.580: [CMS-concurrent-mark-start]
729.595: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.02 secs]
729.595: [CMS-concurrent-preclean-start]
729.595: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
729.595: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 734.597:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
734.597: [GC[YG occupancy: 41058 K (118016 K)]734.597: [Rescan
(parallel) , 0.0053870 secs]734.603: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 53908K(139444K), 0.0054870 secs]
[Times: user=0.06 sys=0.00, real=0.00 secs]
734.603: [CMS-concurrent-sweep-start]
734.604: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
734.604: [CMS-concurrent-reset-start]
734.614: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
734.877: [GC [1 CMS-initial-mark: 12849K(21428K)] 53908K(139444K),
0.0067230 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
734.884: [CMS-concurrent-mark-start]
734.899: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
734.899: [CMS-concurrent-preclean-start]
734.899: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
734.899: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 739.905:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
739.906: [GC[YG occupancy: 41379 K (118016 K)]739.906: [Rescan
(parallel) , 0.0050680 secs]739.911: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 54228K(139444K), 0.0051690 secs]
[Times: user=0.05 sys=0.00, real=0.00 secs]
739.911: [CMS-concurrent-sweep-start]
739.912: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
739.912: [CMS-concurrent-reset-start]
739.921: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
741.922: [GC [1 CMS-initial-mark: 12849K(21428K)] 54356K(139444K),
0.0062880 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
741.928: [CMS-concurrent-mark-start]
741.942: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
741.942: [CMS-concurrent-preclean-start]
741.943: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
741.943: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 747.059:
[CMS-concurrent-abortable-preclean: 0.711/5.117 secs] [Times:
user=0.71 sys=0.00, real=5.12 secs]
747.060: [GC[YG occupancy: 41827 K (118016 K)]747.060: [Rescan
(parallel) , 0.0051040 secs]747.065: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 54677K(139444K), 0.0052090 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
747.065: [CMS-concurrent-sweep-start]
747.067: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
747.067: [CMS-concurrent-reset-start]
747.075: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
749.075: [GC [1 CMS-initial-mark: 12849K(21428K)] 54805K(139444K),
0.0063470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
749.082: [CMS-concurrent-mark-start]
749.095: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
749.095: [CMS-concurrent-preclean-start]
749.096: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
749.096: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 754.175:
[CMS-concurrent-abortable-preclean: 0.718/5.079 secs] [Times:
user=0.72 sys=0.00, real=5.08 secs]
754.175: [GC[YG occupancy: 42423 K (118016 K)]754.175: [Rescan
(parallel) , 0.0051290 secs]754.180: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 55273K(139444K), 0.0052290 secs]
[Times: user=0.05 sys=0.00, real=0.00 secs]
754.181: [CMS-concurrent-sweep-start]
754.182: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
754.182: [CMS-concurrent-reset-start]
754.191: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
756.191: [GC [1 CMS-initial-mark: 12849K(21428K)] 55401K(139444K),
0.0064020 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
756.198: [CMS-concurrent-mark-start]
756.212: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
756.212: [CMS-concurrent-preclean-start]
756.213: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
756.213: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 761.217:
[CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
761.218: [GC[YG occupancy: 42871 K (118016 K)]761.218: [Rescan
(parallel) , 0.0052310 secs]761.223: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 55721K(139444K), 0.0053300 secs]
[Times: user=0.06 sys=0.00, real=0.00 secs]
761.223: [CMS-concurrent-sweep-start]
761.225: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
761.225: [CMS-concurrent-reset-start]
761.234: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
763.234: [GC [1 CMS-initial-mark: 12849K(21428K)] 55849K(139444K),
0.0045400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
763.239: [CMS-concurrent-mark-start]
763.253: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
763.253: [CMS-concurrent-preclean-start]
763.253: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
763.253: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 768.348:
[CMS-concurrent-abortable-preclean: 0.690/5.095 secs] [Times:
user=0.69 sys=0.00, real=5.10 secs]
768.349: [GC[YG occupancy: 43320 K (118016 K)]768.349: [Rescan
(parallel) , 0.0045140 secs]768.353: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 56169K(139444K), 0.0046170 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
768.353: [CMS-concurrent-sweep-start]
768.356: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
768.356: [CMS-concurrent-reset-start]
768.365: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
770.365: [GC [1 CMS-initial-mark: 12849K(21428K)] 56298K(139444K),
0.0063950 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
770.372: [CMS-concurrent-mark-start]
770.388: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
770.388: [CMS-concurrent-preclean-start]
770.388: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
770.388: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 775.400:
[CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
775.401: [GC[YG occupancy: 43768 K (118016 K)]775.401: [Rescan
(parallel) , 0.0043990 secs]775.405: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 56618K(139444K), 0.0045000 secs]
[Times: user=0.06 sys=0.00, real=0.01 secs]
775.405: [CMS-concurrent-sweep-start]
775.407: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
775.407: [CMS-concurrent-reset-start]
775.417: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
777.417: [GC [1 CMS-initial-mark: 12849K(21428K)] 56746K(139444K),
0.0064580 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
777.423: [CMS-concurrent-mark-start]
777.438: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
777.438: [CMS-concurrent-preclean-start]
777.439: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
777.439: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 782.448:
[CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
782.448: [GC[YG occupancy: 44321 K (118016 K)]782.448: [Rescan
(parallel) , 0.0054760 secs]782.454: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 57171K(139444K), 0.0055780 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
782.454: [CMS-concurrent-sweep-start]
782.455: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
782.455: [CMS-concurrent-reset-start]
782.464: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
782.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 57235K(139444K),
0.0066970 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
782.550: [CMS-concurrent-mark-start]
782.567: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
782.567: [CMS-concurrent-preclean-start]
782.568: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
782.568: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 787.574:
[CMS-concurrent-abortable-preclean: 0.700/5.006 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
787.574: [GC[YG occupancy: 44746 K (118016 K)]787.574: [Rescan
(parallel) , 0.0049170 secs]787.579: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 57596K(139444K), 0.0050210 secs]
[Times: user=0.06 sys=0.00, real=0.00 secs]
787.579: [CMS-concurrent-sweep-start]
787.581: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
787.581: [CMS-concurrent-reset-start]
787.590: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
789.591: [GC [1 CMS-initial-mark: 12849K(21428K)] 57724K(139444K),
0.0066850 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
789.598: [CMS-concurrent-mark-start]
789.614: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
789.614: [CMS-concurrent-preclean-start]
789.615: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
789.615: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 794.626:
[CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
794.627: [GC[YG occupancy: 45195 K (118016 K)]794.627: [Rescan
(parallel) , 0.0056520 secs]794.632: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 58044K(139444K), 0.0057510 secs]
[Times: user=0.06 sys=0.00, real=0.00 secs]
794.632: [CMS-concurrent-sweep-start]
794.634: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
794.634: [CMS-concurrent-reset-start]
794.643: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
796.643: [GC [1 CMS-initial-mark: 12849K(21428K)] 58172K(139444K),
0.0067410 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
796.650: [CMS-concurrent-mark-start]
796.666: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
796.666: [CMS-concurrent-preclean-start]
796.667: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
796.667: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 801.670:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
801.670: [GC[YG occupancy: 45643 K (118016 K)]801.670: [Rescan
(parallel) , 0.0043550 secs]801.675: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 58493K(139444K), 0.0044580 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
801.675: [CMS-concurrent-sweep-start]
801.677: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
801.677: [CMS-concurrent-reset-start]
801.686: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
803.686: [GC [1 CMS-initial-mark: 12849K(21428K)] 58621K(139444K),
0.0067250 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
803.693: [CMS-concurrent-mark-start]
803.708: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
803.708: [CMS-concurrent-preclean-start]
803.709: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
803.709: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 808.717:
[CMS-concurrent-abortable-preclean: 0.702/5.008 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
808.717: [GC[YG occupancy: 46091 K (118016 K)]808.717: [Rescan
(parallel) , 0.0034790 secs]808.720: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 58941K(139444K), 0.0035820 secs]
[Times: user=0.04 sys=0.00, real=0.00 secs]
808.721: [CMS-concurrent-sweep-start]
808.722: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
808.722: [CMS-concurrent-reset-start]
808.730: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
810.731: [GC [1 CMS-initial-mark: 12849K(21428K)] 59069K(139444K),
0.0067580 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
810.738: [CMS-concurrent-mark-start]
810.755: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
810.755: [CMS-concurrent-preclean-start]
810.755: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
810.755: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 815.823:
[CMS-concurrent-abortable-preclean: 0.715/5.068 secs] [Times:
user=0.72 sys=0.00, real=5.06 secs]
815.824: [GC[YG occupancy: 46580 K (118016 K)]815.824: [Rescan
(parallel) , 0.0048490 secs]815.829: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 59430K(139444K), 0.0049600 secs]
[Times: user=0.06 sys=0.00, real=0.00 secs]
815.829: [CMS-concurrent-sweep-start]
815.831: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
815.831: [CMS-concurrent-reset-start]
815.840: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
817.840: [GC [1 CMS-initial-mark: 12849K(21428K)] 59558K(139444K),
0.0068880 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
817.847: [CMS-concurrent-mark-start]
817.864: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
817.864: [CMS-concurrent-preclean-start]
817.865: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
817.865: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 822.868:
[CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
user=0.69 sys=0.01, real=5.00 secs]
822.868: [GC[YG occupancy: 47028 K (118016 K)]822.868: [Rescan
(parallel) , 0.0061120 secs]822.874: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 59878K(139444K), 0.0062150 secs]
[Times: user=0.06 sys=0.00, real=0.01 secs]
822.874: [CMS-concurrent-sweep-start]
822.876: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
822.876: [CMS-concurrent-reset-start]
822.885: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
824.885: [GC [1 CMS-initial-mark: 12849K(21428K)] 60006K(139444K),
0.0068610 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
824.892: [CMS-concurrent-mark-start]
824.908: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
824.908: [CMS-concurrent-preclean-start]
824.908: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
824.908: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 829.914:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
829.915: [GC[YG occupancy: 47477 K (118016 K)]829.915: [Rescan
(parallel) , 0.0034890 secs]829.918: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 60327K(139444K), 0.0035930 secs]
[Times: user=0.04 sys=0.00, real=0.00 secs]
829.918: [CMS-concurrent-sweep-start]
829.920: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
829.920: [CMS-concurrent-reset-start]
829.930: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
831.930: [GC [1 CMS-initial-mark: 12849K(21428K)] 60455K(139444K),
0.0069040 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
831.937: [CMS-concurrent-mark-start]
831.953: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
831.953: [CMS-concurrent-preclean-start]
831.954: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
831.954: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 836.957:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.71 sys=0.00, real=5.00 secs]
836.957: [GC[YG occupancy: 47925 K (118016 K)]836.957: [Rescan
(parallel) , 0.0060440 secs]836.963: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 60775K(139444K), 0.0061520 secs]
[Times: user=0.06 sys=0.00, real=0.01 secs]
836.964: [CMS-concurrent-sweep-start]
836.965: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
836.965: [CMS-concurrent-reset-start]
836.974: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
838.974: [GC [1 CMS-initial-mark: 12849K(21428K)] 60903K(139444K),
0.0069860 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
838.982: [CMS-concurrent-mark-start]
838.997: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
838.998: [CMS-concurrent-preclean-start]
838.998: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
838.998: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 844.091:
[CMS-concurrent-abortable-preclean: 0.718/5.093 secs] [Times:
user=0.72 sys=0.00, real=5.09 secs]
844.092: [GC[YG occupancy: 48731 K (118016 K)]844.092: [Rescan
(parallel) , 0.0052610 secs]844.097: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 61581K(139444K), 0.0053620 secs]
[Times: user=0.06 sys=0.00, real=0.01 secs]
844.097: [CMS-concurrent-sweep-start]
844.099: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
844.099: [CMS-concurrent-reset-start]
844.108: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
846.109: [GC [1 CMS-initial-mark: 12849K(21428K)] 61709K(139444K),
0.0071980 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
846.116: [CMS-concurrent-mark-start]
846.133: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
846.133: [CMS-concurrent-preclean-start]
846.134: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
846.134: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 851.137:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
851.137: [GC[YG occupancy: 49180 K (118016 K)]851.137: [Rescan
(parallel) , 0.0061320 secs]851.143: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 62030K(139444K), 0.0062320 secs]
[Times: user=0.07 sys=0.00, real=0.01 secs]
851.144: [CMS-concurrent-sweep-start]
851.145: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
851.145: [CMS-concurrent-reset-start]
851.154: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
853.154: [GC [1 CMS-initial-mark: 12849K(21428K)] 62158K(139444K),
0.0071610 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
853.162: [CMS-concurrent-mark-start]
853.177: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
853.177: [CMS-concurrent-preclean-start]
853.178: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
853.178: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 858.181:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
858.181: [GC[YG occupancy: 49628 K (118016 K)]858.181: [Rescan
(parallel) , 0.0029560 secs]858.184: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 62478K(139444K), 0.0030590 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
858.184: [CMS-concurrent-sweep-start]
858.186: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
858.186: [CMS-concurrent-reset-start]
858.195: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
860.196: [GC [1 CMS-initial-mark: 12849K(21428K)] 62606K(139444K),
0.0072070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
860.203: [CMS-concurrent-mark-start]
860.219: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
860.219: [CMS-concurrent-preclean-start]
860.219: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
860.219: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 865.226:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
865.227: [GC[YG occupancy: 50076 K (118016 K)]865.227: [Rescan
(parallel) , 0.0066610 secs]865.233: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 62926K(139444K), 0.0067670 secs]
[Times: user=0.07 sys=0.00, real=0.01 secs]
865.233: [CMS-concurrent-sweep-start]
865.235: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
865.235: [CMS-concurrent-reset-start]
865.244: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
867.244: [GC [1 CMS-initial-mark: 12849K(21428K)] 63054K(139444K),
0.0072490 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
867.252: [CMS-concurrent-mark-start]
867.267: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
867.267: [CMS-concurrent-preclean-start]
867.268: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
867.268: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 872.281:
[CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
872.281: [GC[YG occupancy: 50525 K (118016 K)]872.281: [Rescan
(parallel) , 0.0053780 secs]872.286: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 63375K(139444K), 0.0054790 secs]
[Times: user=0.06 sys=0.00, real=0.01 secs]
872.287: [CMS-concurrent-sweep-start]
872.288: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
872.288: [CMS-concurrent-reset-start]
872.296: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
872.572: [GC [1 CMS-initial-mark: 12849K(21428K)] 63439K(139444K),
0.0073060 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
872.580: [CMS-concurrent-mark-start]
872.597: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
872.597: [CMS-concurrent-preclean-start]
872.597: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
872.597: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 877.600:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
877.601: [GC[YG occupancy: 51049 K (118016 K)]877.601: [Rescan
(parallel) , 0.0063070 secs]877.607: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 63899K(139444K), 0.0064090 secs]
[Times: user=0.06 sys=0.00, real=0.01 secs]
877.607: [CMS-concurrent-sweep-start]
877.609: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
877.609: [CMS-concurrent-reset-start]
877.619: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
879.619: [GC [1 CMS-initial-mark: 12849K(21428K)] 64027K(139444K),
0.0073320 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
879.626: [CMS-concurrent-mark-start]
879.643: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
879.643: [CMS-concurrent-preclean-start]
879.644: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
879.644: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 884.657:
[CMS-concurrent-abortable-preclean: 0.708/5.014 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
884.658: [GC[YG occupancy: 51497 K (118016 K)]884.658: [Rescan
(parallel) , 0.0056160 secs]884.663: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 64347K(139444K), 0.0057150 secs]
[Times: user=0.07 sys=0.00, real=0.01 secs]
884.663: [CMS-concurrent-sweep-start]
884.665: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
884.665: [CMS-concurrent-reset-start]
884.674: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
886.674: [GC [1 CMS-initial-mark: 12849K(21428K)] 64475K(139444K),
0.0073420 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
886.682: [CMS-concurrent-mark-start]
886.698: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
886.698: [CMS-concurrent-preclean-start]
886.698: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
886.698: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 891.702:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
891.702: [GC[YG occupancy: 51945 K (118016 K)]891.702: [Rescan
(parallel) , 0.0070120 secs]891.709: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 64795K(139444K), 0.0071150 secs]
[Times: user=0.07 sys=0.00, real=0.01 secs]
891.709: [CMS-concurrent-sweep-start]
891.711: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
891.711: [CMS-concurrent-reset-start]
891.721: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
893.721: [GC [1 CMS-initial-mark: 12849K(21428K)] 64923K(139444K),
0.0073880 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
893.728: [CMS-concurrent-mark-start]
893.745: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
893.745: [CMS-concurrent-preclean-start]
893.745: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
893.745: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 898.852:
[CMS-concurrent-abortable-preclean: 0.715/5.107 secs] [Times:
user=0.71 sys=0.00, real=5.10 secs]
898.853: [GC[YG occupancy: 53466 K (118016 K)]898.853: [Rescan
(parallel) , 0.0060600 secs]898.859: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 66315K(139444K), 0.0061640 secs]
[Times: user=0.06 sys=0.00, real=0.01 secs]
898.859: [CMS-concurrent-sweep-start]
898.861: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
898.861: [CMS-concurrent-reset-start]
898.870: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
900.871: [GC [1 CMS-initial-mark: 12849K(21428K)] 66444K(139444K),
0.0074670 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
900.878: [CMS-concurrent-mark-start]
900.895: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
900.895: [CMS-concurrent-preclean-start]
900.896: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
900.896: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 905.969:
[CMS-concurrent-abortable-preclean: 0.716/5.074 secs] [Times:
user=0.72 sys=0.01, real=5.07 secs]
905.969: [GC[YG occupancy: 54157 K (118016 K)]905.970: [Rescan
(parallel) , 0.0068200 secs]905.976: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 67007K(139444K), 0.0069250 secs]
[Times: user=0.07 sys=0.00, real=0.01 secs]
905.977: [CMS-concurrent-sweep-start]
905.978: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
905.978: [CMS-concurrent-reset-start]
905.986: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
907.986: [GC [1 CMS-initial-mark: 12849K(21428K)] 67135K(139444K),
0.0076010 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
907.994: [CMS-concurrent-mark-start]
908.009: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
908.009: [CMS-concurrent-preclean-start]
908.010: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
908.010: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 913.013:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.69 sys=0.01, real=5.00 secs]
913.013: [GC[YG occupancy: 54606 K (118016 K)]913.013: [Rescan
(parallel) , 0.0053650 secs]913.018: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 67455K(139444K), 0.0054650 secs]
[Times: user=0.06 sys=0.00, real=0.01 secs]
913.019: [CMS-concurrent-sweep-start]
913.021: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
913.021: [CMS-concurrent-reset-start]
913.030: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
915.030: [GC [1 CMS-initial-mark: 12849K(21428K)] 67583K(139444K),
0.0076410 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
915.038: [CMS-concurrent-mark-start]
915.055: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
915.055: [CMS-concurrent-preclean-start]
915.056: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
915.056: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 920.058:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
920.058: [GC[YG occupancy: 55054 K (118016 K)]920.058: [Rescan
(parallel) , 0.0058380 secs]920.064: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 67904K(139444K), 0.0059420 secs]
[Times: user=0.07 sys=0.00, real=0.01 secs]
920.064: [CMS-concurrent-sweep-start]
920.066: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
920.066: [CMS-concurrent-reset-start]
920.075: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.01, real=0.01 secs]
922.075: [GC [1 CMS-initial-mark: 12849K(21428K)] 68032K(139444K),
0.0075820 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
922.083: [CMS-concurrent-mark-start]
922.098: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
922.098: [CMS-concurrent-preclean-start]
922.099: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
922.099: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 927.102:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
927.102: [GC[YG occupancy: 55502 K (118016 K)]927.102: [Rescan
(parallel) , 0.0059190 secs]927.108: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 68352K(139444K), 0.0060220 secs]
[Times: user=0.06 sys=0.01, real=0.01 secs]
927.108: [CMS-concurrent-sweep-start]
927.110: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
927.110: [CMS-concurrent-reset-start]
927.120: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
929.120: [GC [1 CMS-initial-mark: 12849K(21428K)] 68480K(139444K),
0.0077620 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
929.128: [CMS-concurrent-mark-start]
929.145: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
929.145: [CMS-concurrent-preclean-start]
929.145: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
929.145: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 934.237:
[CMS-concurrent-abortable-preclean: 0.717/5.092 secs] [Times:
user=0.72 sys=0.00, real=5.09 secs]
934.238: [GC[YG occupancy: 55991 K (118016 K)]934.238: [Rescan
(parallel) , 0.0042660 secs]934.242: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 68841K(139444K), 0.0043660 secs]
[Times: user=0.05 sys=0.00, real=0.00 secs]
934.242: [CMS-concurrent-sweep-start]
934.244: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
934.244: [CMS-concurrent-reset-start]
934.252: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
936.253: [GC [1 CMS-initial-mark: 12849K(21428K)] 68969K(139444K),
0.0077340 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
936.261: [CMS-concurrent-mark-start]
936.277: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
936.277: [CMS-concurrent-preclean-start]
936.278: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
936.278: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 941.284:
[CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
941.284: [GC[YG occupancy: 56439 K (118016 K)]941.284: [Rescan
(parallel) , 0.0059460 secs]941.290: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 69289K(139444K), 0.0060470 secs]
[Times: user=0.08 sys=0.00, real=0.00 secs]
941.290: [CMS-concurrent-sweep-start]
941.293: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
941.293: [CMS-concurrent-reset-start]
941.302: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
943.302: [GC [1 CMS-initial-mark: 12849K(21428K)] 69417K(139444K),
0.0077760 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
943.310: [CMS-concurrent-mark-start]
943.326: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
943.326: [CMS-concurrent-preclean-start]
943.327: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
943.327: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 948.340:
[CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
948.340: [GC[YG occupancy: 56888 K (118016 K)]948.340: [Rescan
(parallel) , 0.0047760 secs]948.345: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 69738K(139444K), 0.0048770 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
948.345: [CMS-concurrent-sweep-start]
948.347: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
948.347: [CMS-concurrent-reset-start]
948.356: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
950.356: [GC [1 CMS-initial-mark: 12849K(21428K)] 69866K(139444K),
0.0077750 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
950.364: [CMS-concurrent-mark-start]
950.380: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
950.380: [CMS-concurrent-preclean-start]
950.380: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
950.380: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 955.384:
[CMS-concurrent-abortable-preclean: 0.698/5.004 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
955.384: [GC[YG occupancy: 57336 K (118016 K)]955.384: [Rescan
(parallel) , 0.0072540 secs]955.392: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 70186K(139444K), 0.0073540 secs]
[Times: user=0.08 sys=0.00, real=0.00 secs]
955.392: [CMS-concurrent-sweep-start]
955.394: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
955.394: [CMS-concurrent-reset-start]
955.403: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
957.403: [GC [1 CMS-initial-mark: 12849K(21428K)] 70314K(139444K),
0.0078120 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
957.411: [CMS-concurrent-mark-start]
957.427: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
957.427: [CMS-concurrent-preclean-start]
957.427: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
957.427: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 962.437:
[CMS-concurrent-abortable-preclean: 0.704/5.010 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
962.437: [GC[YG occupancy: 57889 K (118016 K)]962.437: [Rescan
(parallel) , 0.0076140 secs]962.445: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 70739K(139444K), 0.0077160 secs]
[Times: user=0.08 sys=0.00, real=0.01 secs]
962.445: [CMS-concurrent-sweep-start]
962.446: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
962.446: [CMS-concurrent-reset-start]
962.456: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
962.599: [GC [1 CMS-initial-mark: 12849K(21428K)] 70827K(139444K),
0.0081180 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
962.608: [CMS-concurrent-mark-start]
962.626: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
962.626: [CMS-concurrent-preclean-start]
962.626: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
962.626: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 967.632:
[CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
967.632: [GC[YG occupancy: 58338 K (118016 K)]967.632: [Rescan
(parallel) , 0.0061170 secs]967.638: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 71188K(139444K), 0.0062190 secs]
[Times: user=0.08 sys=0.00, real=0.01 secs]
967.638: [CMS-concurrent-sweep-start]
967.640: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
967.640: [CMS-concurrent-reset-start]
967.648: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
969.648: [GC [1 CMS-initial-mark: 12849K(21428K)] 71316K(139444K),
0.0081110 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
969.656: [CMS-concurrent-mark-start]
969.674: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
969.674: [CMS-concurrent-preclean-start]
969.674: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
969.674: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 974.677:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
974.677: [GC[YG occupancy: 58786 K (118016 K)]974.677: [Rescan
(parallel) , 0.0070810 secs]974.685: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 71636K(139444K), 0.0072050 secs]
[Times: user=0.08 sys=0.00, real=0.01 secs]
974.685: [CMS-concurrent-sweep-start]
974.686: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
974.686: [CMS-concurrent-reset-start]
974.695: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
976.696: [GC [1 CMS-initial-mark: 12849K(21428K)] 71764K(139444K),
0.0080650 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
976.704: [CMS-concurrent-mark-start]
976.719: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
976.719: [CMS-concurrent-preclean-start]
976.719: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
976.719: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 981.727:
[CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
user=0.69 sys=0.01, real=5.01 secs]
981.727: [GC[YG occupancy: 59235 K (118016 K)]981.727: [Rescan
(parallel) , 0.0066570 secs]981.734: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 72085K(139444K), 0.0067620 secs]
[Times: user=0.08 sys=0.00, real=0.01 secs]
981.734: [CMS-concurrent-sweep-start]
981.736: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
981.736: [CMS-concurrent-reset-start]
981.745: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
983.745: [GC [1 CMS-initial-mark: 12849K(21428K)] 72213K(139444K),
0.0081400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
983.753: [CMS-concurrent-mark-start]
983.769: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
983.769: [CMS-concurrent-preclean-start]
983.769: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
983.769: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 988.840:
[CMS-concurrent-abortable-preclean: 0.716/5.071 secs] [Times:
user=0.71 sys=0.00, real=5.07 secs]
988.840: [GC[YG occupancy: 59683 K (118016 K)]988.840: [Rescan
(parallel) , 0.0076020 secs]988.848: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 72533K(139444K), 0.0077100 secs]
[Times: user=0.08 sys=0.00, real=0.01 secs]
988.848: [CMS-concurrent-sweep-start]
988.850: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
988.850: [CMS-concurrent-reset-start]
988.858: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
990.858: [GC [1 CMS-initial-mark: 12849K(21428K)] 72661K(139444K),
0.0081810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
990.867: [CMS-concurrent-mark-start]
990.884: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
990.884: [CMS-concurrent-preclean-start]
990.885: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
990.885: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 995.999:
[CMS-concurrent-abortable-preclean: 0.721/5.114 secs] [Times:
user=0.73 sys=0.00, real=5.11 secs]
995.999: [GC[YG occupancy: 60307 K (118016 K)]995.999: [Rescan
(parallel) , 0.0058190 secs]996.005: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 73156K(139444K), 0.0059260 secs]
[Times: user=0.06 sys=0.00, real=0.01 secs]
996.005: [CMS-concurrent-sweep-start]
996.007: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
996.007: [CMS-concurrent-reset-start]
996.016: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
998.016: [GC [1 CMS-initial-mark: 12849K(21428K)] 73285K(139444K),
0.0052760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
998.022: [CMS-concurrent-mark-start]
998.038: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
998.038: [CMS-concurrent-preclean-start]
998.039: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
998.039: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1003.048:
[CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1003.048: [GC[YG occupancy: 60755 K (118016 K)]1003.048: [Rescan
(parallel) , 0.0068040 secs]1003.055: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 73605K(139444K), 0.0069060 secs]
[Times: user=0.08 sys=0.00, real=0.01 secs]
1003.055: [CMS-concurrent-sweep-start]
1003.057: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1003.057: [CMS-concurrent-reset-start]
1003.066: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1005.066: [GC [1 CMS-initial-mark: 12849K(21428K)] 73733K(139444K),
0.0082200 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1005.075: [CMS-concurrent-mark-start]
1005.090: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1005.090: [CMS-concurrent-preclean-start]
1005.090: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1005.090: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1010.094:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
1010.094: [GC[YG occupancy: 61203 K (118016 K)]1010.094: [Rescan
(parallel) , 0.0066010 secs]1010.101: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 74053K(139444K), 0.0067120 secs]
[Times: user=0.08 sys=0.00, real=0.00 secs]
1010.101: [CMS-concurrent-sweep-start]
1010.103: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1010.103: [CMS-concurrent-reset-start]
1010.112: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1012.113: [GC [1 CMS-initial-mark: 12849K(21428K)] 74181K(139444K),
0.0083460 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1012.121: [CMS-concurrent-mark-start]
1012.137: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1012.137: [CMS-concurrent-preclean-start]
1012.138: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1012.138: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1017.144:
[CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
1017.144: [GC[YG occupancy: 61651 K (118016 K)]1017.144: [Rescan
(parallel) , 0.0058810 secs]1017.150: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 74501K(139444K), 0.0059830 secs]
[Times: user=0.06 sys=0.00, real=0.00 secs]
1017.151: [CMS-concurrent-sweep-start]
1017.153: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1017.153: [CMS-concurrent-reset-start]
1017.162: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1019.162: [GC [1 CMS-initial-mark: 12849K(21428K)] 74629K(139444K),
0.0083310 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1019.171: [CMS-concurrent-mark-start]
1019.187: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1019.187: [CMS-concurrent-preclean-start]
1019.187: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1019.187: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1024.261:
[CMS-concurrent-abortable-preclean: 0.717/5.074 secs] [Times:
user=0.72 sys=0.00, real=5.07 secs]
1024.261: [GC[YG occupancy: 62351 K (118016 K)]1024.262: [Rescan
(parallel) , 0.0069720 secs]1024.269: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 75200K(139444K), 0.0070750 secs]
[Times: user=0.08 sys=0.01, real=0.01 secs]
1024.269: [CMS-concurrent-sweep-start]
1024.270: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1024.270: [CMS-concurrent-reset-start]
1024.278: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1026.279: [GC [1 CMS-initial-mark: 12849K(21428K)] 75329K(139444K),
0.0086360 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1026.288: [CMS-concurrent-mark-start]
1026.305: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1026.305: [CMS-concurrent-preclean-start]
1026.305: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1026.305: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1031.308:
[CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1031.308: [GC[YG occupancy: 62799 K (118016 K)]1031.308: [Rescan
(parallel) , 0.0069330 secs]1031.315: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 75649K(139444K), 0.0070380 secs]
[Times: user=0.09 sys=0.00, real=0.01 secs]
1031.315: [CMS-concurrent-sweep-start]
1031.316: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1031.316: [CMS-concurrent-reset-start]
1031.326: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1033.326: [GC [1 CMS-initial-mark: 12849K(21428K)] 75777K(139444K),
0.0085850 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1033.335: [CMS-concurrent-mark-start]
1033.350: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1033.350: [CMS-concurrent-preclean-start]
1033.351: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1033.351: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1038.357:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.69 sys=0.01, real=5.01 secs]
1038.358: [GC[YG occupancy: 63247 K (118016 K)]1038.358: [Rescan
(parallel) , 0.0071860 secs]1038.365: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 76097K(139444K), 0.0072900 secs]
[Times: user=0.08 sys=0.00, real=0.01 secs]
1038.365: [CMS-concurrent-sweep-start]
1038.367: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1038.367: [CMS-concurrent-reset-start]
1038.376: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1040.376: [GC [1 CMS-initial-mark: 12849K(21428K)] 76225K(139444K),
0.0085910 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1040.385: [CMS-concurrent-mark-start]
1040.401: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1040.401: [CMS-concurrent-preclean-start]
1040.401: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1040.401: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1045.411:
[CMS-concurrent-abortable-preclean: 0.705/5.010 secs] [Times:
user=0.69 sys=0.01, real=5.01 secs]
1045.412: [GC[YG occupancy: 63695 K (118016 K)]1045.412: [Rescan
(parallel) , 0.0082050 secs]1045.420: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 76545K(139444K), 0.0083110 secs]
[Times: user=0.09 sys=0.00, real=0.01 secs]
1045.420: [CMS-concurrent-sweep-start]
1045.421: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1045.421: [CMS-concurrent-reset-start]
1045.430: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1047.430: [GC [1 CMS-initial-mark: 12849K(21428K)] 76673K(139444K),
0.0086110 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1047.439: [CMS-concurrent-mark-start]
1047.456: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1047.456: [CMS-concurrent-preclean-start]
1047.456: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1047.456: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1052.462:
[CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
1052.462: [GC[YG occupancy: 64144 K (118016 K)]1052.462: [Rescan
(parallel) , 0.0087770 secs]1052.471: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 76994K(139444K), 0.0088770 secs]
[Times: user=0.09 sys=0.00, real=0.01 secs]
1052.471: [CMS-concurrent-sweep-start]
1052.472: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1052.472: [CMS-concurrent-reset-start]
1052.481: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1052.628: [GC [1 CMS-initial-mark: 12849K(21428K)] 77058K(139444K),
0.0086170 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1052.637: [CMS-concurrent-mark-start]
1052.655: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1052.655: [CMS-concurrent-preclean-start]
1052.656: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1052.656: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1057.658:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1057.658: [GC[YG occupancy: 64569 K (118016 K)]1057.658: [Rescan
(parallel) , 0.0072850 secs]1057.665: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 77418K(139444K), 0.0073880 secs]
[Times: user=0.09 sys=0.00, real=0.01 secs]
1057.666: [CMS-concurrent-sweep-start]
1057.668: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1057.668: [CMS-concurrent-reset-start]
1057.677: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1059.677: [GC [1 CMS-initial-mark: 12849K(21428K)] 77547K(139444K),
0.0086820 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1059.686: [CMS-concurrent-mark-start]
1059.703: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1059.703: [CMS-concurrent-preclean-start]
1059.703: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1059.703: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1064.712:
[CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1064.712: [GC[YG occupancy: 65017 K (118016 K)]1064.712: [Rescan
(parallel) , 0.0071630 secs]1064.720: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 77867K(139444K), 0.0072700 secs]
[Times: user=0.09 sys=0.00, real=0.01 secs]
1064.720: [CMS-concurrent-sweep-start]
1064.722: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1064.722: [CMS-concurrent-reset-start]
1064.731: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1066.731: [GC [1 CMS-initial-mark: 12849K(21428K)] 77995K(139444K),
0.0087640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1066.740: [CMS-concurrent-mark-start]
1066.757: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1066.757: [CMS-concurrent-preclean-start]
1066.757: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1066.757: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1071.821:
[CMS-concurrent-abortable-preclean: 0.714/5.064 secs] [Times:
user=0.71 sys=0.00, real=5.06 secs]
1071.822: [GC[YG occupancy: 65465 K (118016 K)]1071.822: [Rescan
(parallel) , 0.0056280 secs]1071.827: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 78315K(139444K), 0.0057430 secs]
[Times: user=0.07 sys=0.00, real=0.01 secs]
1071.828: [CMS-concurrent-sweep-start]
1071.830: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1071.830: [CMS-concurrent-reset-start]
1071.839: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1073.839: [GC [1 CMS-initial-mark: 12849K(21428K)] 78443K(139444K),
0.0087570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1073.848: [CMS-concurrent-mark-start]
1073.865: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1073.865: [CMS-concurrent-preclean-start]
1073.865: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1073.865: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1078.868:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1078.868: [GC[YG occupancy: 65914 K (118016 K)]1078.868: [Rescan
(parallel) , 0.0055280 secs]1078.873: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 78763K(139444K), 0.0056320 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
1078.874: [CMS-concurrent-sweep-start]
1078.875: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1078.875: [CMS-concurrent-reset-start]
1078.884: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1080.884: [GC [1 CMS-initial-mark: 12849K(21428K)] 78892K(139444K),
0.0088520 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1080.893: [CMS-concurrent-mark-start]
1080.908: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1080.909: [CMS-concurrent-preclean-start]
1080.909: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1080.909: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1086.021:
[CMS-concurrent-abortable-preclean: 0.714/5.112 secs] [Times:
user=0.72 sys=0.00, real=5.11 secs]
1086.021: [GC[YG occupancy: 66531 K (118016 K)]1086.022: [Rescan
(parallel) , 0.0075330 secs]1086.029: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 79381K(139444K), 0.0076440 secs]
[Times: user=0.09 sys=0.01, real=0.01 secs]
1086.029: [CMS-concurrent-sweep-start]
1086.031: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1086.031: [CMS-concurrent-reset-start]
1086.041: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1088.041: [GC [1 CMS-initial-mark: 12849K(21428K)] 79509K(139444K),
0.0091350 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1088.050: [CMS-concurrent-mark-start]
1088.066: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1088.067: [CMS-concurrent-preclean-start]
1088.067: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1088.067: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1093.070:
[CMS-concurrent-abortable-preclean: 0.698/5.004 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1093.071: [GC[YG occupancy: 66980 K (118016 K)]1093.071: [Rescan
(parallel) , 0.0051870 secs]1093.076: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 79830K(139444K), 0.0052930 secs]
[Times: user=0.07 sys=0.00, real=0.01 secs]
1093.076: [CMS-concurrent-sweep-start]
1093.078: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1093.078: [CMS-concurrent-reset-start]
1093.087: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1095.088: [GC [1 CMS-initial-mark: 12849K(21428K)] 79958K(139444K),
0.0091350 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1095.097: [CMS-concurrent-mark-start]
1095.114: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1095.114: [CMS-concurrent-preclean-start]
1095.115: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1095.115: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1100.121:
[CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
user=0.69 sys=0.01, real=5.00 secs]
1100.121: [GC[YG occupancy: 67428 K (118016 K)]1100.122: [Rescan
(parallel) , 0.0068510 secs]1100.128: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 80278K(139444K), 0.0069510 secs]
[Times: user=0.09 sys=0.00, real=0.01 secs]
1100.129: [CMS-concurrent-sweep-start]
1100.130: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1100.130: [CMS-concurrent-reset-start]
1100.138: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1102.139: [GC [1 CMS-initial-mark: 12849K(21428K)] 80406K(139444K),
0.0090760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1102.148: [CMS-concurrent-mark-start]
1102.165: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1102.165: [CMS-concurrent-preclean-start]
1102.165: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1102.165: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1107.168:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1107.168: [GC[YG occupancy: 67876 K (118016 K)]1107.168: [Rescan
(parallel) , 0.0076420 secs]1107.176: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 80726K(139444K), 0.0077500 secs]
[Times: user=0.09 sys=0.00, real=0.01 secs]
1107.176: [CMS-concurrent-sweep-start]
1107.178: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1107.178: [CMS-concurrent-reset-start]
1107.187: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1109.188: [GC [1 CMS-initial-mark: 12849K(21428K)] 80854K(139444K),
0.0091510 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1109.197: [CMS-concurrent-mark-start]
1109.214: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1109.214: [CMS-concurrent-preclean-start]
1109.214: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1109.214: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1114.290:
[CMS-concurrent-abortable-preclean: 0.711/5.076 secs] [Times:
user=0.72 sys=0.00, real=5.07 secs]
1114.290: [GC[YG occupancy: 68473 K (118016 K)]1114.290: [Rescan
(parallel) , 0.0084730 secs]1114.299: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 81322K(139444K), 0.0085810 secs]
[Times: user=0.09 sys=0.00, real=0.01 secs]
1114.299: [CMS-concurrent-sweep-start]
1114.301: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1114.301: [CMS-concurrent-reset-start]
1114.310: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1115.803: [GC [1 CMS-initial-mark: 12849K(21428K)] 81451K(139444K),
0.0106050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1115.814: [CMS-concurrent-mark-start]
1115.830: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1115.830: [CMS-concurrent-preclean-start]
1115.831: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1115.831: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1120.839:
[CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1120.839: [GC[YG occupancy: 68921 K (118016 K)]1120.839: [Rescan
(parallel) , 0.0088800 secs]1120.848: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 81771K(139444K), 0.0089910 secs]
[Times: user=0.09 sys=0.00, real=0.01 secs]
1120.848: [CMS-concurrent-sweep-start]
1120.850: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1120.850: [CMS-concurrent-reset-start]
1120.858: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1122.859: [GC [1 CMS-initial-mark: 12849K(21428K)] 81899K(139444K),
0.0092280 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1122.868: [CMS-concurrent-mark-start]
1122.885: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1122.885: [CMS-concurrent-preclean-start]
1122.885: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1122.885: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1127.888:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.69 sys=0.01, real=5.00 secs]
1127.888: [GC[YG occupancy: 69369 K (118016 K)]1127.888: [Rescan
(parallel) , 0.0087740 secs]1127.897: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 82219K(139444K), 0.0088850 secs]
[Times: user=0.10 sys=0.00, real=0.01 secs]
1127.897: [CMS-concurrent-sweep-start]
1127.898: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1127.898: [CMS-concurrent-reset-start]
1127.906: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1129.907: [GC [1 CMS-initial-mark: 12849K(21428K)] 82347K(139444K),
0.0092280 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1129.916: [CMS-concurrent-mark-start]
1129.933: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1129.933: [CMS-concurrent-preclean-start]
1129.934: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1129.934: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1134.938:
[CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1134.938: [GC[YG occupancy: 69818 K (118016 K)]1134.939: [Rescan
(parallel) , 0.0078530 secs]1134.946: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 82667K(139444K), 0.0079630 secs]
[Times: user=0.10 sys=0.00, real=0.01 secs]
1134.947: [CMS-concurrent-sweep-start]
1134.948: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1134.948: [CMS-concurrent-reset-start]
1134.956: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1136.957: [GC [1 CMS-initial-mark: 12849K(21428K)] 82795K(139444K),
0.0092760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1136.966: [CMS-concurrent-mark-start]
1136.983: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.01 secs]
1136.983: [CMS-concurrent-preclean-start]
1136.984: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.01 secs]
1136.984: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1141.991:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1141.991: [GC[YG occupancy: 70266 K (118016 K)]1141.991: [Rescan
(parallel) , 0.0090620 secs]1142.000: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 83116K(139444K), 0.0091700 secs]
[Times: user=0.10 sys=0.00, real=0.01 secs]
1142.000: [CMS-concurrent-sweep-start]
1142.002: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1142.002: [CMS-concurrent-reset-start]
1142.011: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1142.657: [GC [1 CMS-initial-mark: 12849K(21428K)] 83390K(139444K),
0.0094330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1142.667: [CMS-concurrent-mark-start]
1142.685: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1142.685: [CMS-concurrent-preclean-start]
1142.686: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1142.686: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1147.688:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1147.688: [GC[YG occupancy: 70901 K (118016 K)]1147.688: [Rescan
(parallel) , 0.0081170 secs]1147.696: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 83751K(139444K), 0.0082390 secs]
[Times: user=0.10 sys=0.00, real=0.01 secs]
1147.697: [CMS-concurrent-sweep-start]
1147.698: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1147.698: [CMS-concurrent-reset-start]
1147.706: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1149.706: [GC [1 CMS-initial-mark: 12849K(21428K)] 83879K(139444K),
0.0095560 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1149.716: [CMS-concurrent-mark-start]
1149.734: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1149.734: [CMS-concurrent-preclean-start]
1149.734: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1149.734: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1154.741:
[CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1154.741: [GC[YG occupancy: 71349 K (118016 K)]1154.741: [Rescan
(parallel) , 0.0090720 secs]1154.750: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 84199K(139444K), 0.0091780 secs]
[Times: user=0.10 sys=0.01, real=0.01 secs]
1154.750: [CMS-concurrent-sweep-start]
1154.752: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1154.752: [CMS-concurrent-reset-start]
1154.762: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1155.021: [GC [1 CMS-initial-mark: 12849K(21428K)] 84199K(139444K),
0.0094030 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1155.031: [CMS-concurrent-mark-start]
1155.047: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1155.047: [CMS-concurrent-preclean-start]
1155.047: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1155.047: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1160.056:
[CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1160.056: [GC[YG occupancy: 71669 K (118016 K)]1160.056: [Rescan
(parallel) , 0.0056520 secs]1160.062: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 84519K(139444K), 0.0057790 secs]
[Times: user=0.07 sys=0.00, real=0.00 secs]
1160.062: [CMS-concurrent-sweep-start]
1160.064: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1160.064: [CMS-concurrent-reset-start]
1160.073: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1162.074: [GC [1 CMS-initial-mark: 12849K(21428K)] 84647K(139444K),
0.0095040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1162.083: [CMS-concurrent-mark-start]
1162.098: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1162.098: [CMS-concurrent-preclean-start]
1162.099: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1162.099: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1167.102:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1167.102: [GC[YG occupancy: 72118 K (118016 K)]1167.102: [Rescan
(parallel) , 0.0072180 secs]1167.110: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 84968K(139444K), 0.0073300 secs]
[Times: user=0.08 sys=0.00, real=0.01 secs]
1167.110: [CMS-concurrent-sweep-start]
1167.112: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1167.112: [CMS-concurrent-reset-start]
1167.121: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1169.121: [GC [1 CMS-initial-mark: 12849K(21428K)] 85096K(139444K),
0.0096940 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1169.131: [CMS-concurrent-mark-start]
1169.147: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1169.147: [CMS-concurrent-preclean-start]
1169.147: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1169.147: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1174.197:
[CMS-concurrent-abortable-preclean: 0.720/5.050 secs] [Times:
user=0.72 sys=0.01, real=5.05 secs]
1174.198: [GC[YG occupancy: 72607 K (118016 K)]1174.198: [Rescan
(parallel) , 0.0064910 secs]1174.204: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 85456K(139444K), 0.0065940 secs]
[Times: user=0.06 sys=0.01, real=0.01 secs]
1174.204: [CMS-concurrent-sweep-start]
1174.206: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1174.206: [CMS-concurrent-reset-start]
1174.215: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1176.215: [GC [1 CMS-initial-mark: 12849K(21428K)] 85585K(139444K),
0.0095940 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1176.225: [CMS-concurrent-mark-start]
1176.240: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1176.240: [CMS-concurrent-preclean-start]
1176.241: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1176.241: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1181.244:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1181.244: [GC[YG occupancy: 73055 K (118016 K)]1181.244: [Rescan
(parallel) , 0.0093030 secs]1181.254: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 85905K(139444K), 0.0094040 secs]
[Times: user=0.09 sys=0.01, real=0.01 secs]
1181.254: [CMS-concurrent-sweep-start]
1181.256: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1181.256: [CMS-concurrent-reset-start]
1181.265: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1183.266: [GC [1 CMS-initial-mark: 12849K(21428K)] 86033K(139444K),
0.0096490 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1183.275: [CMS-concurrent-mark-start]
1183.293: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
sys=0.00, real=0.02 secs]
1183.293: [CMS-concurrent-preclean-start]
1183.294: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1183.294: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1188.301:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1188.301: [GC[YG occupancy: 73503 K (118016 K)]1188.301: [Rescan
(parallel) , 0.0092610 secs]1188.310: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 86353K(139444K), 0.0093750 secs]
[Times: user=0.11 sys=0.00, real=0.01 secs]
1188.310: [CMS-concurrent-sweep-start]
1188.312: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1188.312: [CMS-concurrent-reset-start]
1188.320: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1190.321: [GC [1 CMS-initial-mark: 12849K(21428K)] 86481K(139444K),
0.0097510 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1190.331: [CMS-concurrent-mark-start]
1190.347: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1190.347: [CMS-concurrent-preclean-start]
1190.347: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1190.347: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1195.359:
[CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1195.359: [GC[YG occupancy: 73952 K (118016 K)]1195.359: [Rescan
(parallel) , 0.0093210 secs]1195.368: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 86801K(139444K), 0.0094330 secs]
[Times: user=0.11 sys=0.00, real=0.01 secs]
1195.369: [CMS-concurrent-sweep-start]
1195.370: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1195.370: [CMS-concurrent-reset-start]
1195.378: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1196.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 88001K(139444K),
0.0099870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1196.553: [CMS-concurrent-mark-start]
1196.570: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1196.570: [CMS-concurrent-preclean-start]
1196.570: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1196.570: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1201.574:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1201.574: [GC[YG occupancy: 75472 K (118016 K)]1201.574: [Rescan
(parallel) , 0.0096480 secs]1201.584: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 88322K(139444K), 0.0097500 secs]
[Times: user=0.10 sys=0.00, real=0.01 secs]
1201.584: [CMS-concurrent-sweep-start]
1201.586: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1201.586: [CMS-concurrent-reset-start]
1201.595: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1202.679: [GC [1 CMS-initial-mark: 12849K(21428K)] 88491K(139444K),
0.0099400 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1202.690: [CMS-concurrent-mark-start]
1202.708: [CMS-concurrent-mark: 0.016/0.019 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1202.708: [CMS-concurrent-preclean-start]
1202.709: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1202.709: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1207.718:
[CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1207.718: [GC[YG occupancy: 76109 K (118016 K)]1207.718: [Rescan
(parallel) , 0.0096360 secs]1207.727: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 88959K(139444K), 0.0097380 secs]
[Times: user=0.11 sys=0.00, real=0.01 secs]
1207.728: [CMS-concurrent-sweep-start]
1207.729: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1207.729: [CMS-concurrent-reset-start]
1207.737: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1209.738: [GC [1 CMS-initial-mark: 12849K(21428K)] 89087K(139444K),
0.0099440 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1209.748: [CMS-concurrent-mark-start]
1209.765: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1209.765: [CMS-concurrent-preclean-start]
1209.765: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1209.765: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1214.797:
[CMS-concurrent-abortable-preclean: 0.716/5.031 secs] [Times:
user=0.72 sys=0.00, real=5.03 secs]
1214.797: [GC[YG occupancy: 76557 K (118016 K)]1214.797: [Rescan
(parallel) , 0.0096280 secs]1214.807: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 89407K(139444K), 0.0097320 secs]
[Times: user=0.12 sys=0.00, real=0.01 secs]
1214.807: [CMS-concurrent-sweep-start]
1214.808: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1214.808: [CMS-concurrent-reset-start]
1214.816: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1216.817: [GC [1 CMS-initial-mark: 12849K(21428K)] 89535K(139444K),
0.0099640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1216.827: [CMS-concurrent-mark-start]
1216.844: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1216.844: [CMS-concurrent-preclean-start]
1216.844: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1216.844: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1221.847:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1221.847: [GC[YG occupancy: 77005 K (118016 K)]1221.847: [Rescan
(parallel) , 0.0061810 secs]1221.854: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 89855K(139444K), 0.0062950 secs]
[Times: user=0.07 sys=0.00, real=0.01 secs]
1221.854: [CMS-concurrent-sweep-start]
1221.855: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1221.855: [CMS-concurrent-reset-start]
1221.864: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1223.865: [GC [1 CMS-initial-mark: 12849K(21428K)] 89983K(139444K),
0.0100430 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1223.875: [CMS-concurrent-mark-start]
1223.890: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1223.890: [CMS-concurrent-preclean-start]
1223.891: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1223.891: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1228.899:
[CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1228.899: [GC[YG occupancy: 77454 K (118016 K)]1228.899: [Rescan
(parallel) , 0.0095850 secs]1228.909: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 90304K(139444K), 0.0096960 secs]
[Times: user=0.11 sys=0.00, real=0.01 secs]
1228.909: [CMS-concurrent-sweep-start]
1228.911: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1228.911: [CMS-concurrent-reset-start]
1228.919: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1230.919: [GC [1 CMS-initial-mark: 12849K(21428K)] 90432K(139444K),
0.0101360 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1230.930: [CMS-concurrent-mark-start]
1230.946: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1230.946: [CMS-concurrent-preclean-start]
1230.947: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1230.947: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1235.952:
[CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1235.952: [GC[YG occupancy: 77943 K (118016 K)]1235.952: [Rescan
(parallel) , 0.0084420 secs]1235.961: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 90793K(139444K), 0.0085450 secs]
[Times: user=0.10 sys=0.00, real=0.01 secs]
1235.961: [CMS-concurrent-sweep-start]
1235.963: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1235.963: [CMS-concurrent-reset-start]
1235.972: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1237.973: [GC [1 CMS-initial-mark: 12849K(21428K)] 90921K(139444K),
0.0101280 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1237.983: [CMS-concurrent-mark-start]
1237.998: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1237.998: [CMS-concurrent-preclean-start]
1237.999: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1237.999: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1243.008:
[CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1243.008: [GC[YG occupancy: 78391 K (118016 K)]1243.008: [Rescan
(parallel) , 0.0090510 secs]1243.017: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 91241K(139444K), 0.0091560 secs]
[Times: user=0.11 sys=0.00, real=0.01 secs]
1243.017: [CMS-concurrent-sweep-start]
1243.019: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1243.019: [CMS-concurrent-reset-start]
1243.027: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1245.027: [GC [1 CMS-initial-mark: 12849K(21428K)] 91369K(139444K),
0.0101080 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1245.038: [CMS-concurrent-mark-start]
1245.055: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1245.055: [CMS-concurrent-preclean-start]
1245.055: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1245.055: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1250.058:
[CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1250.058: [GC[YG occupancy: 78839 K (118016 K)]1250.058: [Rescan
(parallel) , 0.0096920 secs]1250.068: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 91689K(139444K), 0.0098040 secs]
[Times: user=0.11 sys=0.00, real=0.01 secs]
1250.068: [CMS-concurrent-sweep-start]
1250.070: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1250.070: [CMS-concurrent-reset-start]
1250.078: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1252.078: [GC [1 CMS-initial-mark: 12849K(21428K)] 91817K(139444K),
0.0102560 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1252.089: [CMS-concurrent-mark-start]
1252.105: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1252.105: [CMS-concurrent-preclean-start]
1252.106: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1252.106: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1257.113:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1257.113: [GC[YG occupancy: 79288 K (118016 K)]1257.113: [Rescan
(parallel) , 0.0089920 secs]1257.122: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 92137K(139444K), 0.0090960 secs]
[Times: user=0.11 sys=0.00, real=0.01 secs]
1257.122: [CMS-concurrent-sweep-start]
1257.124: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1257.124: [CMS-concurrent-reset-start]
1257.133: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1259.134: [GC [1 CMS-initial-mark: 12849K(21428K)] 92266K(139444K),
0.0101720 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1259.144: [CMS-concurrent-mark-start]
1259.159: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
1259.159: [CMS-concurrent-preclean-start]
1259.159: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1259.159: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1264.229:
[CMS-concurrent-abortable-preclean: 0.716/5.070 secs] [Times:
user=0.72 sys=0.01, real=5.07 secs]
1264.229: [GC[YG occupancy: 79881 K (118016 K)]1264.229: [Rescan
(parallel) , 0.0101320 secs]1264.240: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 92731K(139444K), 0.0102440 secs]
[Times: user=0.11 sys=0.00, real=0.01 secs]
1264.240: [CMS-concurrent-sweep-start]
1264.241: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1264.241: [CMS-concurrent-reset-start]
1264.250: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1266.250: [GC [1 CMS-initial-mark: 12849K(21428K)] 92859K(139444K),
0.0105180 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1266.261: [CMS-concurrent-mark-start]
1266.277: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1266.277: [CMS-concurrent-preclean-start]
1266.277: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1266.277: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1271.285:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1271.285: [GC[YG occupancy: 80330 K (118016 K)]1271.285: [Rescan
(parallel) , 0.0094600 secs]1271.295: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 93180K(139444K), 0.0095600 secs]
[Times: user=0.11 sys=0.00, real=0.01 secs]
1271.295: [CMS-concurrent-sweep-start]
1271.297: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1271.297: [CMS-concurrent-reset-start]
1271.306: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1273.306: [GC [1 CMS-initial-mark: 12849K(21428K)] 93308K(139444K),
0.0104100 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1273.317: [CMS-concurrent-mark-start]
1273.334: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1273.334: [CMS-concurrent-preclean-start]
1273.335: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1273.335: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1278.341:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1278.341: [GC[YG occupancy: 80778 K (118016 K)]1278.341: [Rescan
(parallel) , 0.0101320 secs]1278.351: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 93628K(139444K), 0.0102460 secs]
[Times: user=0.12 sys=0.00, real=0.01 secs]
1278.351: [CMS-concurrent-sweep-start]
1278.353: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1278.353: [CMS-concurrent-reset-start]
1278.362: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1280.362: [GC [1 CMS-initial-mark: 12849K(21428K)] 93756K(139444K),
0.0105680 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1280.373: [CMS-concurrent-mark-start]
1280.388: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1280.388: [CMS-concurrent-preclean-start]
1280.388: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1280.388: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1285.400:
[CMS-concurrent-abortable-preclean: 0.706/5.012 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1285.400: [GC[YG occupancy: 81262 K (118016 K)]1285.400: [Rescan
(parallel) , 0.0093660 secs]1285.410: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 94111K(139444K), 0.0094820 secs]
[Times: user=0.12 sys=0.00, real=0.01 secs]
1285.410: [CMS-concurrent-sweep-start]
1285.411: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1285.411: [CMS-concurrent-reset-start]
1285.420: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1287.420: [GC [1 CMS-initial-mark: 12849K(21428K)] 94240K(139444K),
0.0105800 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1287.431: [CMS-concurrent-mark-start]
1287.447: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1287.447: [CMS-concurrent-preclean-start]
1287.447: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1287.447: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1292.460:
[CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1292.460: [GC[YG occupancy: 81710 K (118016 K)]1292.460: [Rescan
(parallel) , 0.0081130 secs]1292.468: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 94560K(139444K), 0.0082210 secs]
[Times: user=0.09 sys=0.00, real=0.01 secs]
1292.468: [CMS-concurrent-sweep-start]
1292.470: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1292.470: [CMS-concurrent-reset-start]
1292.480: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1292.712: [GC [1 CMS-initial-mark: 12849K(21428K)] 94624K(139444K),
0.0104870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1292.723: [CMS-concurrent-mark-start]
1292.739: [CMS-concurrent-mark: 0.015/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1292.739: [CMS-concurrent-preclean-start]
1292.740: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1292.740: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1297.748:
[CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1297.748: [GC[YG occupancy: 82135 K (118016 K)]1297.748: [Rescan
(parallel) , 0.0106180 secs]1297.759: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 94985K(139444K), 0.0107410 secs]
[Times: user=0.12 sys=0.00, real=0.01 secs]
1297.759: [CMS-concurrent-sweep-start]
1297.760: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1297.761: [CMS-concurrent-reset-start]
1297.769: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1299.769: [GC [1 CMS-initial-mark: 12849K(21428K)] 95113K(139444K),
0.0105340 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1299.780: [CMS-concurrent-mark-start]
1299.796: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1299.796: [CMS-concurrent-preclean-start]
1299.797: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1299.797: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1304.805:
[CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
user=0.69 sys=0.00, real=5.01 secs]
1304.805: [GC[YG occupancy: 82583 K (118016 K)]1304.806: [Rescan
(parallel) , 0.0094010 secs]1304.815: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 95433K(139444K), 0.0095140 secs]
[Times: user=0.12 sys=0.00, real=0.01 secs]
1304.815: [CMS-concurrent-sweep-start]
1304.817: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1304.817: [CMS-concurrent-reset-start]
1304.827: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1306.827: [GC [1 CMS-initial-mark: 12849K(21428K)] 95561K(139444K),
0.0107300 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1306.838: [CMS-concurrent-mark-start]
1306.855: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1306.855: [CMS-concurrent-preclean-start]
1306.855: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1306.855: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1311.858:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1311.858: [GC[YG occupancy: 83032 K (118016 K)]1311.858: [Rescan
(parallel) , 0.0094210 secs]1311.867: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 95882K(139444K), 0.0095360 secs]
[Times: user=0.11 sys=0.00, real=0.01 secs]
1311.868: [CMS-concurrent-sweep-start]
1311.869: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1311.869: [CMS-concurrent-reset-start]
1311.877: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1313.878: [GC [1 CMS-initial-mark: 12849K(21428K)] 96010K(139444K),
0.0107870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1313.889: [CMS-concurrent-mark-start]
1313.905: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1313.905: [CMS-concurrent-preclean-start]
1313.906: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1313.906: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1318.914:
[CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1318.915: [GC[YG occupancy: 83481 K (118016 K)]1318.915: [Rescan
(parallel) , 0.0096280 secs]1318.924: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 96331K(139444K), 0.0097340 secs]
[Times: user=0.12 sys=0.00, real=0.01 secs]
1318.925: [CMS-concurrent-sweep-start]
1318.927: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1318.927: [CMS-concurrent-reset-start]
1318.936: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1320.936: [GC [1 CMS-initial-mark: 12849K(21428K)] 96459K(139444K),
0.0106300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1320.947: [CMS-concurrent-mark-start]
1320.964: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1320.964: [CMS-concurrent-preclean-start]
1320.965: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1320.965: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1325.991:
[CMS-concurrent-abortable-preclean: 0.717/5.026 secs] [Times:
user=0.73 sys=0.00, real=5.02 secs]
1325.991: [GC[YG occupancy: 84205 K (118016 K)]1325.991: [Rescan
(parallel) , 0.0097880 secs]1326.001: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 97055K(139444K), 0.0099010 secs]
[Times: user=0.12 sys=0.00, real=0.01 secs]
1326.001: [CMS-concurrent-sweep-start]
1326.003: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1326.003: [CMS-concurrent-reset-start]
1326.012: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1328.013: [GC [1 CMS-initial-mark: 12849K(21428K)] 97183K(139444K),
0.0109730 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1328.024: [CMS-concurrent-mark-start]
1328.039: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1328.039: [CMS-concurrent-preclean-start]
1328.039: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1328.039: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1333.043:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
1333.043: [GC[YG occupancy: 84654 K (118016 K)]1333.043: [Rescan
(parallel) , 0.0110740 secs]1333.054: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 97504K(139444K), 0.0111760 secs]
[Times: user=0.12 sys=0.01, real=0.02 secs]
1333.054: [CMS-concurrent-sweep-start]
1333.056: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1333.056: [CMS-concurrent-reset-start]
1333.065: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1335.066: [GC [1 CMS-initial-mark: 12849K(21428K)] 97632K(139444K),
0.0109300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1335.077: [CMS-concurrent-mark-start]
1335.094: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1335.094: [CMS-concurrent-preclean-start]
1335.094: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1335.094: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1340.103:
[CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1340.103: [GC[YG occupancy: 85203 K (118016 K)]1340.103: [Rescan
(parallel) , 0.0109470 secs]1340.114: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 98052K(139444K), 0.0110500 secs]
[Times: user=0.11 sys=0.01, real=0.02 secs]
1340.114: [CMS-concurrent-sweep-start]
1340.116: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1340.116: [CMS-concurrent-reset-start]
1340.125: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1342.126: [GC [1 CMS-initial-mark: 12849K(21428K)] 98181K(139444K),
0.0109170 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1342.137: [CMS-concurrent-mark-start]
1342.154: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1342.154: [CMS-concurrent-preclean-start]
1342.154: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1342.154: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1347.161:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1347.162: [GC[YG occupancy: 85652 K (118016 K)]1347.162: [Rescan
(parallel) , 0.0075610 secs]1347.169: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 98502K(139444K), 0.0076680 secs]
[Times: user=0.10 sys=0.00, real=0.01 secs]
1347.169: [CMS-concurrent-sweep-start]
1347.171: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1347.172: [CMS-concurrent-reset-start]
1347.181: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1349.181: [GC [1 CMS-initial-mark: 12849K(21428K)] 98630K(139444K),
0.0109540 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1349.192: [CMS-concurrent-mark-start]
1349.208: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1349.208: [CMS-concurrent-preclean-start]
1349.208: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1349.208: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1354.268:
[CMS-concurrent-abortable-preclean: 0.723/5.060 secs] [Times:
user=0.73 sys=0.00, real=5.06 secs]
1354.268: [GC[YG occupancy: 86241 K (118016 K)]1354.268: [Rescan
(parallel) , 0.0099530 secs]1354.278: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 99091K(139444K), 0.0100670 secs]
[Times: user=0.12 sys=0.00, real=0.01 secs]
1354.278: [CMS-concurrent-sweep-start]
1354.280: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1354.280: [CMS-concurrent-reset-start]
1354.288: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1356.289: [GC [1 CMS-initial-mark: 12849K(21428K)] 99219K(139444K),
0.0111450 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1356.300: [CMS-concurrent-mark-start]
1356.316: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1356.316: [CMS-concurrent-preclean-start]
1356.317: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1356.317: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1361.322:
[CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1361.322: [GC[YG occupancy: 86690 K (118016 K)]1361.322: [Rescan
(parallel) , 0.0097180 secs]1361.332: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 99540K(139444K), 0.0098210 secs]
[Times: user=0.10 sys=0.00, real=0.01 secs]
1361.332: [CMS-concurrent-sweep-start]
1361.333: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1361.333: [CMS-concurrent-reset-start]
1361.342: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1363.342: [GC [1 CMS-initial-mark: 12849K(21428K)] 99668K(139444K),
0.0110230 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1363.354: [CMS-concurrent-mark-start]
1363.368: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1363.368: [CMS-concurrent-preclean-start]
1363.369: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1363.369: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1368.378:
[CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1368.378: [GC[YG occupancy: 87139 K (118016 K)]1368.378: [Rescan
(parallel) , 0.0100770 secs]1368.388: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 99989K(139444K), 0.0101900 secs]
[Times: user=0.13 sys=0.00, real=0.01 secs]
1368.388: [CMS-concurrent-sweep-start]
1368.390: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1368.390: [CMS-concurrent-reset-start]
1368.398: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1370.399: [GC [1 CMS-initial-mark: 12849K(21428K)] 100117K(139444K),
0.0111810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1370.410: [CMS-concurrent-mark-start]
1370.426: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1370.426: [CMS-concurrent-preclean-start]
1370.427: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1370.427: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1375.447:
[CMS-concurrent-abortable-preclean: 0.715/5.020 secs] [Times:
user=0.72 sys=0.00, real=5.02 secs]
1375.447: [GC[YG occupancy: 87588 K (118016 K)]1375.447: [Rescan
(parallel) , 0.0101690 secs]1375.457: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 100438K(139444K), 0.0102730 secs]
[Times: user=0.13 sys=0.00, real=0.01 secs]
1375.457: [CMS-concurrent-sweep-start]
1375.459: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1375.459: [CMS-concurrent-reset-start]
1375.467: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1377.467: [GC [1 CMS-initial-mark: 12849K(21428K)] 100566K(139444K),
0.0110760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1377.478: [CMS-concurrent-mark-start]
1377.495: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1377.495: [CMS-concurrent-preclean-start]
1377.496: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1377.496: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1382.502:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.69 sys=0.01, real=5.00 secs]
1382.502: [GC[YG occupancy: 89213 K (118016 K)]1382.502: [Rescan
(parallel) , 0.0108630 secs]1382.513: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 102063K(139444K), 0.0109700 secs]
[Times: user=0.13 sys=0.00, real=0.01 secs]
1382.513: [CMS-concurrent-sweep-start]
1382.514: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1382.514: [CMS-concurrent-reset-start]
1382.523: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1382.743: [GC [1 CMS-initial-mark: 12849K(21428K)] 102127K(139444K),
0.0113140 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1382.755: [CMS-concurrent-mark-start]
1382.773: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1382.773: [CMS-concurrent-preclean-start]
1382.774: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1382.774: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1387.777:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1387.777: [GC[YG occupancy: 89638 K (118016 K)]1387.777: [Rescan
(parallel) , 0.0113310 secs]1387.789: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 102488K(139444K), 0.0114430 secs]
[Times: user=0.13 sys=0.00, real=0.01 secs]
1387.789: [CMS-concurrent-sweep-start]
1387.790: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1387.790: [CMS-concurrent-reset-start]
1387.799: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1389.799: [GC [1 CMS-initial-mark: 12849K(21428K)] 102617K(139444K),
0.0113540 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1389.810: [CMS-concurrent-mark-start]
1389.827: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1389.827: [CMS-concurrent-preclean-start]
1389.827: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1389.827: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1394.831:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1394.831: [GC[YG occupancy: 90088 K (118016 K)]1394.831: [Rescan
(parallel) , 0.0103790 secs]1394.841: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 102938K(139444K), 0.0104960 secs]
[Times: user=0.13 sys=0.00, real=0.01 secs]
1394.842: [CMS-concurrent-sweep-start]
1394.844: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1394.844: [CMS-concurrent-reset-start]
1394.853: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1396.853: [GC [1 CMS-initial-mark: 12849K(21428K)] 103066K(139444K),
0.0114740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1396.865: [CMS-concurrent-mark-start]
1396.880: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1396.880: [CMS-concurrent-preclean-start]
1396.881: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1396.881: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1401.890:
[CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1401.890: [GC[YG occupancy: 90537 K (118016 K)]1401.891: [Rescan
(parallel) , 0.0116110 secs]1401.902: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 103387K(139444K), 0.0117240 secs]
[Times: user=0.13 sys=0.00, real=0.01 secs]
1401.902: [CMS-concurrent-sweep-start]
1401.904: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1401.904: [CMS-concurrent-reset-start]
1401.914: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1403.914: [GC [1 CMS-initial-mark: 12849K(21428K)] 103515K(139444K),
0.0111980 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1403.925: [CMS-concurrent-mark-start]
1403.943: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
sys=0.00, real=0.01 secs]
1403.943: [CMS-concurrent-preclean-start]
1403.944: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.01 secs]
1403.944: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1408.982:
[CMS-concurrent-abortable-preclean: 0.718/5.038 secs] [Times:
user=0.72 sys=0.00, real=5.03 secs]
1408.982: [GC[YG occupancy: 90986 K (118016 K)]1408.982: [Rescan
(parallel) , 0.0115260 secs]1408.994: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 103836K(139444K), 0.0116320 secs]
[Times: user=0.13 sys=0.00, real=0.02 secs]
1408.994: [CMS-concurrent-sweep-start]
1408.996: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1408.996: [CMS-concurrent-reset-start]
1409.005: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1411.005: [GC [1 CMS-initial-mark: 12849K(21428K)] 103964K(139444K),
0.0114590 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1411.017: [CMS-concurrent-mark-start]
1411.034: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1411.034: [CMS-concurrent-preclean-start]
1411.034: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1411.034: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1416.140:
[CMS-concurrent-abortable-preclean: 0.712/5.105 secs] [Times:
user=0.71 sys=0.00, real=5.10 secs]
1416.140: [GC[YG occupancy: 91476 K (118016 K)]1416.140: [Rescan
(parallel) , 0.0114950 secs]1416.152: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 104326K(139444K), 0.0116020 secs]
[Times: user=0.13 sys=0.00, real=0.01 secs]
1416.152: [CMS-concurrent-sweep-start]
1416.154: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1416.154: [CMS-concurrent-reset-start]
1416.163: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1418.163: [GC [1 CMS-initial-mark: 12849K(21428K)] 104454K(139444K),
0.0114040 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1418.175: [CMS-concurrent-mark-start]
1418.191: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1418.191: [CMS-concurrent-preclean-start]
1418.191: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1418.191: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1423.198:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1423.199: [GC[YG occupancy: 91925 K (118016 K)]1423.199: [Rescan
(parallel) , 0.0105460 secs]1423.209: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 104775K(139444K), 0.0106640 secs]
[Times: user=0.13 sys=0.00, real=0.01 secs]
1423.209: [CMS-concurrent-sweep-start]
1423.211: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1423.211: [CMS-concurrent-reset-start]
1423.220: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1425.220: [GC [1 CMS-initial-mark: 12849K(21428K)] 104903K(139444K),
0.0116300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1425.232: [CMS-concurrent-mark-start]
1425.248: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1425.248: [CMS-concurrent-preclean-start]
1425.248: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1425.248: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1430.252:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
1430.252: [GC[YG occupancy: 92374 K (118016 K)]1430.252: [Rescan
(parallel) , 0.0098720 secs]1430.262: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 105224K(139444K), 0.0099750 secs]
[Times: user=0.12 sys=0.00, real=0.01 secs]
1430.262: [CMS-concurrent-sweep-start]
1430.264: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1430.264: [CMS-concurrent-reset-start]
1430.273: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1432.274: [GC [1 CMS-initial-mark: 12849K(21428K)] 105352K(139444K),
0.0114050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1432.285: [CMS-concurrent-mark-start]
1432.301: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1432.301: [CMS-concurrent-preclean-start]
1432.301: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1432.301: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1437.304:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
1437.305: [GC[YG occupancy: 92823 K (118016 K)]1437.305: [Rescan
(parallel) , 0.0115010 secs]1437.316: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 105673K(139444K), 0.0116090 secs]
[Times: user=0.14 sys=0.00, real=0.01 secs]
1437.316: [CMS-concurrent-sweep-start]
1437.319: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1437.319: [CMS-concurrent-reset-start]
1437.328: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1439.328: [GC [1 CMS-initial-mark: 12849K(21428K)] 105801K(139444K),
0.0115740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1439.340: [CMS-concurrent-mark-start]
1439.356: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1439.356: [CMS-concurrent-preclean-start]
1439.356: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1439.356: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1444.411:
[CMS-concurrent-abortable-preclean: 0.715/5.054 secs] [Times:
user=0.72 sys=0.00, real=5.05 secs]
1444.411: [GC[YG occupancy: 93547 K (118016 K)]1444.411: [Rescan
(parallel) , 0.0072910 secs]1444.418: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 106397K(139444K), 0.0073970 secs]
[Times: user=0.09 sys=0.00, real=0.01 secs]
1444.419: [CMS-concurrent-sweep-start]
1444.420: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1444.420: [CMS-concurrent-reset-start]
1444.429: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1446.429: [GC [1 CMS-initial-mark: 12849K(21428K)] 106525K(139444K),
0.0117950 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1446.441: [CMS-concurrent-mark-start]
1446.457: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1446.457: [CMS-concurrent-preclean-start]
1446.458: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1446.458: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1451.461:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1451.461: [GC[YG occupancy: 93996 K (118016 K)]1451.461: [Rescan
(parallel) , 0.0120870 secs]1451.473: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 106846K(139444K), 0.0121920 secs]
[Times: user=0.14 sys=0.00, real=0.02 secs]
1451.473: [CMS-concurrent-sweep-start]
1451.476: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1451.476: [CMS-concurrent-reset-start]
1451.485: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1453.485: [GC [1 CMS-initial-mark: 12849K(21428K)] 106974K(139444K),
0.0117990 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1453.497: [CMS-concurrent-mark-start]
1453.514: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1453.514: [CMS-concurrent-preclean-start]
1453.515: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1453.515: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1458.518:
[CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1458.518: [GC[YG occupancy: 94445 K (118016 K)]1458.518: [Rescan
(parallel) , 0.0123720 secs]1458.530: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 107295K(139444K), 0.0124750 secs]
[Times: user=0.14 sys=0.00, real=0.01 secs]
1458.530: [CMS-concurrent-sweep-start]
1458.532: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1458.532: [CMS-concurrent-reset-start]
1458.540: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1460.541: [GC [1 CMS-initial-mark: 12849K(21428K)] 107423K(139444K),
0.0118680 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1460.553: [CMS-concurrent-mark-start]
1460.568: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1460.568: [CMS-concurrent-preclean-start]
1460.569: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1460.569: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1465.577:
[CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1465.577: [GC[YG occupancy: 94894 K (118016 K)]1465.577: [Rescan
(parallel) , 0.0119100 secs]1465.589: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 107744K(139444K), 0.0120270 secs]
[Times: user=0.14 sys=0.00, real=0.01 secs]
1465.590: [CMS-concurrent-sweep-start]
1465.591: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1465.591: [CMS-concurrent-reset-start]
1465.600: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1467.600: [GC [1 CMS-initial-mark: 12849K(21428K)] 107937K(139444K),
0.0120020 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1467.612: [CMS-concurrent-mark-start]
1467.628: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1467.628: [CMS-concurrent-preclean-start]
1467.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1467.628: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1472.636:
[CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1472.637: [GC[YG occupancy: 95408 K (118016 K)]1472.637: [Rescan
(parallel) , 0.0119090 secs]1472.649: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 108257K(139444K), 0.0120260 secs]
[Times: user=0.13 sys=0.00, real=0.01 secs]
1472.649: [CMS-concurrent-sweep-start]
1472.650: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1472.650: [CMS-concurrent-reset-start]
1472.659: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1472.775: [GC [1 CMS-initial-mark: 12849K(21428K)] 108365K(139444K),
0.0120260 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1472.787: [CMS-concurrent-mark-start]
1472.805: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1472.805: [CMS-concurrent-preclean-start]
1472.806: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.01 sys=0.00, real=0.00 secs]
1472.806: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1477.808:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
1477.808: [GC[YG occupancy: 95876 K (118016 K)]1477.808: [Rescan
(parallel) , 0.0099490 secs]1477.818: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 108726K(139444K), 0.0100580 secs]
[Times: user=0.11 sys=0.00, real=0.01 secs]
1477.818: [CMS-concurrent-sweep-start]
1477.820: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1477.820: [CMS-concurrent-reset-start]
1477.828: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1479.829: [GC [1 CMS-initial-mark: 12849K(21428K)] 108854K(139444K),
0.0119550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1479.841: [CMS-concurrent-mark-start]
1479.857: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1479.857: [CMS-concurrent-preclean-start]
1479.857: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1479.857: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1484.870:
[CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1484.870: [GC[YG occupancy: 96325 K (118016 K)]1484.870: [Rescan
(parallel) , 0.0122870 secs]1484.882: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 109175K(139444K), 0.0123900 secs]
[Times: user=0.14 sys=0.00, real=0.01 secs]
1484.882: [CMS-concurrent-sweep-start]
1484.884: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1484.884: [CMS-concurrent-reset-start]
1484.893: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1486.893: [GC [1 CMS-initial-mark: 12849K(21428K)] 109304K(139444K),
0.0118470 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
1486.905: [CMS-concurrent-mark-start]
1486.921: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.01 secs]
1486.921: [CMS-concurrent-preclean-start]
1486.921: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1486.921: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1491.968:
[CMS-concurrent-abortable-preclean: 0.720/5.047 secs] [Times:
user=0.72 sys=0.00, real=5.05 secs]
1491.968: [GC[YG occupancy: 96774 K (118016 K)]1491.968: [Rescan
(parallel) , 0.0122850 secs]1491.981: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 109624K(139444K), 0.0123880 secs]
[Times: user=0.14 sys=0.00, real=0.01 secs]
1491.981: [CMS-concurrent-sweep-start]
1491.982: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1491.982: [CMS-concurrent-reset-start]
1491.991: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1493.991: [GC [1 CMS-initial-mark: 12849K(21428K)] 109753K(139444K),
0.0119790 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1494.004: [CMS-concurrent-mark-start]
1494.019: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1494.019: [CMS-concurrent-preclean-start]
1494.019: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1494.019: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1499.100:
[CMS-concurrent-abortable-preclean: 0.722/5.080 secs] [Times:
user=0.72 sys=0.00, real=5.08 secs]
1499.100: [GC[YG occupancy: 98295 K (118016 K)]1499.100: [Rescan
(parallel) , 0.0123180 secs]1499.112: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 111145K(139444K), 0.0124240 secs]
[Times: user=0.14 sys=0.00, real=0.01 secs]
1499.113: [CMS-concurrent-sweep-start]
1499.114: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1499.114: [CMS-concurrent-reset-start]
1499.123: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1501.123: [GC [1 CMS-initial-mark: 12849K(21428K)] 111274K(139444K),
0.0117720 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
1501.135: [CMS-concurrent-mark-start]
1501.150: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
1501.150: [CMS-concurrent-preclean-start]
1501.151: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.01 sys=0.00, real=0.00 secs]
1501.151: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1506.172:
[CMS-concurrent-abortable-preclean: 0.712/5.022 secs] [Times:
user=0.71 sys=0.00, real=5.02 secs]
1506.172: [GC[YG occupancy: 98890 K (118016 K)]1506.173: [Rescan
(parallel) , 0.0113790 secs]1506.184: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 111740K(139444K), 0.0114830 secs]
[Times: user=0.13 sys=0.00, real=0.02 secs]
1506.184: [CMS-concurrent-sweep-start]
1506.186: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1506.186: [CMS-concurrent-reset-start]
1506.195: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1508.196: [GC [1 CMS-initial-mark: 12849K(21428K)] 111868K(139444K),
0.0122930 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1508.208: [CMS-concurrent-mark-start]
1508.225: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1508.225: [CMS-concurrent-preclean-start]
1508.225: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1508.226: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1513.232:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1513.232: [GC[YG occupancy: 99339 K (118016 K)]1513.232: [Rescan
(parallel) , 0.0123890 secs]1513.244: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 112189K(139444K), 0.0124930 secs]
[Times: user=0.14 sys=0.00, real=0.02 secs]
1513.245: [CMS-concurrent-sweep-start]
1513.246: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1513.246: [CMS-concurrent-reset-start]
1513.255: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1515.256: [GC [1 CMS-initial-mark: 12849K(21428K)] 113182K(139444K),
0.0123210 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1515.268: [CMS-concurrent-mark-start]
1515.285: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1515.285: [CMS-concurrent-preclean-start]
1515.285: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1515.285: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1520.290:
[CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1520.290: [GC[YG occupancy: 100653 K (118016 K)]1520.290: [Rescan
(parallel) , 0.0125490 secs]1520.303: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 113502K(139444K), 0.0126520 secs]
[Times: user=0.14 sys=0.00, real=0.01 secs]
1520.303: [CMS-concurrent-sweep-start]
1520.304: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1520.304: [CMS-concurrent-reset-start]
1520.313: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1522.314: [GC [1 CMS-initial-mark: 12849K(21428K)] 113631K(139444K),
0.0118790 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1522.326: [CMS-concurrent-mark-start]
1522.343: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.01 secs]
1522.343: [CMS-concurrent-preclean-start]
1522.343: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1522.343: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1527.350:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1527.350: [GC[YG occupancy: 101102 K (118016 K)]1527.350: [Rescan
(parallel) , 0.0127460 secs]1527.363: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 113952K(139444K), 0.0128490 secs]
[Times: user=0.15 sys=0.00, real=0.01 secs]
1527.363: [CMS-concurrent-sweep-start]
1527.365: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1527.365: [CMS-concurrent-reset-start]
1527.374: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1529.374: [GC [1 CMS-initial-mark: 12849K(21428K)] 114080K(139444K),
0.0117550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1529.386: [CMS-concurrent-mark-start]
1529.403: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1529.404: [CMS-concurrent-preclean-start]
1529.404: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1529.404: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1534.454:
[CMS-concurrent-abortable-preclean: 0.712/5.050 secs] [Times:
user=0.70 sys=0.01, real=5.05 secs]
1534.454: [GC[YG occupancy: 101591 K (118016 K)]1534.454: [Rescan
(parallel) , 0.0122680 secs]1534.466: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 114441K(139444K), 0.0123750 secs]
[Times: user=0.12 sys=0.02, real=0.01 secs]
1534.466: [CMS-concurrent-sweep-start]
1534.468: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1534.468: [CMS-concurrent-reset-start]
1534.478: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1536.478: [GC [1 CMS-initial-mark: 12849K(21428K)] 114570K(139444K),
0.0125250 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1536.491: [CMS-concurrent-mark-start]
1536.507: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1536.507: [CMS-concurrent-preclean-start]
1536.507: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1536.507: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1541.516:
[CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1541.516: [GC[YG occupancy: 102041 K (118016 K)]1541.516: [Rescan
(parallel) , 0.0088270 secs]1541.525: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 114890K(139444K), 0.0089300 secs]
[Times: user=0.10 sys=0.00, real=0.01 secs]
1541.525: [CMS-concurrent-sweep-start]
1541.527: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1541.527: [CMS-concurrent-reset-start]
1541.537: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1543.537: [GC [1 CMS-initial-mark: 12849K(21428K)] 115019K(139444K),
0.0124500 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1543.550: [CMS-concurrent-mark-start]
1543.566: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1543.566: [CMS-concurrent-preclean-start]
1543.567: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1543.567: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1548.578:
[CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1548.578: [GC[YG occupancy: 102490 K (118016 K)]1548.578: [Rescan
(parallel) , 0.0100430 secs]1548.588: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 115340K(139444K), 0.0101440 secs]
[Times: user=0.11 sys=0.00, real=0.01 secs]
1548.588: [CMS-concurrent-sweep-start]
1548.589: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1548.589: [CMS-concurrent-reset-start]
1548.598: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1550.598: [GC [1 CMS-initial-mark: 12849K(21428K)] 115468K(139444K),
0.0125070 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1550.611: [CMS-concurrent-mark-start]
1550.627: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1550.627: [CMS-concurrent-preclean-start]
1550.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1550.628: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1555.631:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
1555.631: [GC[YG occupancy: 103003 K (118016 K)]1555.631: [Rescan
(parallel) , 0.0117610 secs]1555.643: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 115853K(139444K), 0.0118770 secs]
[Times: user=0.15 sys=0.00, real=0.01 secs]
1555.643: [CMS-concurrent-sweep-start]
1555.645: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1555.645: [CMS-concurrent-reset-start]
1555.655: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1557.655: [GC [1 CMS-initial-mark: 12849K(21428K)] 115981K(139444K),
0.0126720 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1557.668: [CMS-concurrent-mark-start]
1557.685: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1557.685: [CMS-concurrent-preclean-start]
1557.685: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1557.685: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1562.688:
[CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1562.688: [GC[YG occupancy: 103557 K (118016 K)]1562.688: [Rescan
(parallel) , 0.0121530 secs]1562.700: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 116407K(139444K), 0.0122560 secs]
[Times: user=0.16 sys=0.00, real=0.01 secs]
1562.700: [CMS-concurrent-sweep-start]
1562.701: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1562.701: [CMS-concurrent-reset-start]
1562.710: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1562.821: [GC [1 CMS-initial-mark: 12849K(21428K)] 116514K(139444K),
0.0127240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1562.834: [CMS-concurrent-mark-start]
1562.852: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
sys=0.00, real=0.01 secs]
1562.852: [CMS-concurrent-preclean-start]
1562.853: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1562.853: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1567.859:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1567.859: [GC[YG occupancy: 104026 K (118016 K)]1567.859: [Rescan
(parallel) , 0.0131290 secs]1567.872: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 116876K(139444K), 0.0132470 secs]
[Times: user=0.15 sys=0.00, real=0.01 secs]
1567.873: [CMS-concurrent-sweep-start]
1567.874: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1567.874: [CMS-concurrent-reset-start]
1567.883: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1569.883: [GC [1 CMS-initial-mark: 12849K(21428K)] 117103K(139444K),
0.0123770 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
1569.896: [CMS-concurrent-mark-start]
1569.913: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.01 secs]
1569.913: [CMS-concurrent-preclean-start]
1569.913: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.01 secs]
1569.913: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1574.920:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1574.920: [GC[YG occupancy: 104510 K (118016 K)]1574.920: [Rescan
(parallel) , 0.0122810 secs]1574.932: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 117360K(139444K), 0.0123870 secs]
[Times: user=0.15 sys=0.00, real=0.01 secs]
1574.933: [CMS-concurrent-sweep-start]
1574.935: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1574.935: [CMS-concurrent-reset-start]
1574.944: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1575.163: [GC [1 CMS-initial-mark: 12849K(21428K)] 117360K(139444K),
0.0121590 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
1575.176: [CMS-concurrent-mark-start]
1575.193: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1575.193: [CMS-concurrent-preclean-start]
1575.193: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.01 secs]
1575.193: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1580.197:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.71 sys=0.00, real=5.00 secs]
1580.197: [GC[YG occupancy: 104831 K (118016 K)]1580.197: [Rescan
(parallel) , 0.0129860 secs]1580.210: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 117681K(139444K), 0.0130980 secs]
[Times: user=0.15 sys=0.00, real=0.01 secs]
1580.210: [CMS-concurrent-sweep-start]
1580.211: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1580.211: [CMS-concurrent-reset-start]
1580.220: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1582.220: [GC [1 CMS-initial-mark: 12849K(21428K)] 117809K(139444K),
0.0129700 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1582.234: [CMS-concurrent-mark-start]
1582.249: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
sys=0.01, real=0.02 secs]
1582.249: [CMS-concurrent-preclean-start]
1582.249: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1582.249: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1587.262:
[CMS-concurrent-abortable-preclean: 0.707/5.013 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1587.262: [GC[YG occupancy: 105280 K (118016 K)]1587.262: [Rescan
(parallel) , 0.0134570 secs]1587.276: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 118130K(139444K), 0.0135720 secs]
[Times: user=0.15 sys=0.00, real=0.02 secs]
1587.276: [CMS-concurrent-sweep-start]
1587.278: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1587.278: [CMS-concurrent-reset-start]
1587.287: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1589.287: [GC [1 CMS-initial-mark: 12849K(21428K)] 118258K(139444K),
0.0130010 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1589.301: [CMS-concurrent-mark-start]
1589.316: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1589.316: [CMS-concurrent-preclean-start]
1589.316: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1589.316: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1594.364:
[CMS-concurrent-abortable-preclean: 0.712/5.048 secs] [Times:
user=0.71 sys=0.00, real=5.05 secs]
1594.365: [GC[YG occupancy: 105770 K (118016 K)]1594.365: [Rescan
(parallel) , 0.0131190 secs]1594.378: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 118620K(139444K), 0.0132380 secs]
[Times: user=0.15 sys=0.00, real=0.01 secs]
1594.378: [CMS-concurrent-sweep-start]
1594.380: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1594.380: [CMS-concurrent-reset-start]
1594.389: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1596.390: [GC [1 CMS-initial-mark: 12849K(21428K)] 118748K(139444K),
0.0130650 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1596.403: [CMS-concurrent-mark-start]
1596.418: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1596.418: [CMS-concurrent-preclean-start]
1596.419: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1596.419: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1601.422:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.69 sys=0.01, real=5.00 secs]
1601.422: [GC[YG occupancy: 106219 K (118016 K)]1601.422: [Rescan
(parallel) , 0.0130310 secs]1601.435: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 119069K(139444K), 0.0131490 secs]
[Times: user=0.16 sys=0.00, real=0.02 secs]
1601.435: [CMS-concurrent-sweep-start]
1601.437: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1601.437: [CMS-concurrent-reset-start]
1601.446: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1603.447: [GC [1 CMS-initial-mark: 12849K(21428K)] 119197K(139444K),
0.0130220 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1603.460: [CMS-concurrent-mark-start]
1603.476: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1603.476: [CMS-concurrent-preclean-start]
1603.476: [CMS-concurrent-preclean: 0.000/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1603.476: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1608.478:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1608.478: [GC[YG occupancy: 106668 K (118016 K)]1608.479: [Rescan
(parallel) , 0.0122680 secs]1608.491: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 119518K(139444K), 0.0123790 secs]
[Times: user=0.15 sys=0.00, real=0.01 secs]
1608.491: [CMS-concurrent-sweep-start]
1608.492: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1608.492: [CMS-concurrent-reset-start]
1608.501: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1610.502: [GC [1 CMS-initial-mark: 12849K(21428K)] 119646K(139444K),
0.0130770 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
1610.515: [CMS-concurrent-mark-start]
1610.530: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1610.530: [CMS-concurrent-preclean-start]
1610.530: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1610.530: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1615.536:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1615.536: [GC[YG occupancy: 107117 K (118016 K)]1615.536: [Rescan
(parallel) , 0.0125470 secs]1615.549: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 119967K(139444K), 0.0126510 secs]
[Times: user=0.15 sys=0.00, real=0.01 secs]
1615.549: [CMS-concurrent-sweep-start]
1615.551: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1615.551: [CMS-concurrent-reset-start]
1615.561: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1617.561: [GC [1 CMS-initial-mark: 12849K(21428K)] 120095K(139444K),
0.0129520 secs] [Times: user=0.00 sys=0.00, real=0.02 secs]
1617.574: [CMS-concurrent-mark-start]
1617.591: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.01 secs]
1617.591: [CMS-concurrent-preclean-start]
1617.591: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1617.591: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1622.598:
[CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1622.598: [GC[YG occupancy: 107777 K (118016 K)]1622.599: [Rescan
(parallel) , 0.0140340 secs]1622.613: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 120627K(139444K), 0.0141520 secs]
[Times: user=0.16 sys=0.00, real=0.01 secs]
1622.613: [CMS-concurrent-sweep-start]
1622.614: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1622.614: [CMS-concurrent-reset-start]
1622.623: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.02 secs]
1622.848: [GC [1 CMS-initial-mark: 12849K(21428K)] 120691K(139444K),
0.0133410 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1622.861: [CMS-concurrent-mark-start]
1622.878: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1622.878: [CMS-concurrent-preclean-start]
1622.879: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1622.879: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1627.941:
[CMS-concurrent-abortable-preclean: 0.656/5.062 secs] [Times:
user=0.65 sys=0.00, real=5.06 secs]
1627.941: [GC[YG occupancy: 108202 K (118016 K)]1627.941: [Rescan
(parallel) , 0.0135120 secs]1627.955: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 121052K(139444K), 0.0136620 secs]
[Times: user=0.15 sys=0.00, real=0.02 secs]
1627.955: [CMS-concurrent-sweep-start]
1627.956: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1627.956: [CMS-concurrent-reset-start]
1627.965: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1629.966: [GC [1 CMS-initial-mark: 12849K(21428K)] 121180K(139444K),
0.0133770 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1629.979: [CMS-concurrent-mark-start]
1629.995: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1629.995: [CMS-concurrent-preclean-start]
1629.996: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1629.996: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1634.998:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.69 sys=0.00, real=5.00 secs]
1634.999: [GC[YG occupancy: 108651 K (118016 K)]1634.999: [Rescan
(parallel) , 0.0134300 secs]1635.012: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 121501K(139444K), 0.0135530 secs]
[Times: user=0.16 sys=0.00, real=0.01 secs]
1635.012: [CMS-concurrent-sweep-start]
1635.014: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1635.014: [CMS-concurrent-reset-start]
1635.023: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1637.023: [GC [1 CMS-initial-mark: 12849K(21428K)] 121629K(139444K),
0.0127330 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
1637.036: [CMS-concurrent-mark-start]
1637.053: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1637.054: [CMS-concurrent-preclean-start]
1637.054: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1637.054: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1642.062:
[CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1642.062: [GC[YG occupancy: 109100 K (118016 K)]1642.062: [Rescan
(parallel) , 0.0124310 secs]1642.075: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 121950K(139444K), 0.0125510 secs]
[Times: user=0.16 sys=0.00, real=0.02 secs]
1642.075: [CMS-concurrent-sweep-start]
1642.077: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1642.077: [CMS-concurrent-reset-start]
1642.086: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1644.087: [GC [1 CMS-initial-mark: 12849K(21428K)] 122079K(139444K),
0.0134300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1644.100: [CMS-concurrent-mark-start]
1644.116: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1644.116: [CMS-concurrent-preclean-start]
1644.116: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1644.116: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1649.125:
[CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1649.126: [GC[YG occupancy: 109549 K (118016 K)]1649.126: [Rescan
(parallel) , 0.0126870 secs]1649.138: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 122399K(139444K), 0.0128010 secs]
[Times: user=0.15 sys=0.00, real=0.01 secs]
1649.139: [CMS-concurrent-sweep-start]
1649.141: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1649.141: [CMS-concurrent-reset-start]
1649.150: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1651.150: [GC [1 CMS-initial-mark: 12849K(21428K)] 122528K(139444K),
0.0134790 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1651.164: [CMS-concurrent-mark-start]
1651.179: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1651.179: [CMS-concurrent-preclean-start]
1651.179: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1651.179: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1656.254:
[CMS-concurrent-abortable-preclean: 0.722/5.074 secs] [Times:
user=0.71 sys=0.01, real=5.07 secs]
1656.254: [GC[YG occupancy: 110039 K (118016 K)]1656.254: [Rescan
(parallel) , 0.0092110 secs]1656.263: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 122889K(139444K), 0.0093170 secs]
[Times: user=0.12 sys=0.00, real=0.01 secs]
1656.263: [CMS-concurrent-sweep-start]
1656.266: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1656.266: [CMS-concurrent-reset-start]
1656.275: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1658.275: [GC [1 CMS-initial-mark: 12849K(21428K)] 123017K(139444K),
0.0134150 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1658.289: [CMS-concurrent-mark-start]
1658.305: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1658.306: [CMS-concurrent-preclean-start]
1658.306: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1658.306: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1663.393:
[CMS-concurrent-abortable-preclean: 0.711/5.087 secs] [Times:
user=0.71 sys=0.00, real=5.08 secs]
1663.393: [GC[YG occupancy: 110488 K (118016 K)]1663.393: [Rescan
(parallel) , 0.0132450 secs]1663.406: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 123338K(139444K), 0.0133600 secs]
[Times: user=0.15 sys=0.00, real=0.02 secs]
1663.407: [CMS-concurrent-sweep-start]
1663.409: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1663.409: [CMS-concurrent-reset-start]
1663.418: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1665.418: [GC [1 CMS-initial-mark: 12849K(21428K)] 123467K(139444K),
0.0135570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1665.432: [CMS-concurrent-mark-start]
1665.447: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1665.447: [CMS-concurrent-preclean-start]
1665.448: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1665.448: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1670.457:
[CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1670.457: [GC[YG occupancy: 110937 K (118016 K)]1670.457: [Rescan
(parallel) , 0.0142820 secs]1670.471: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 123787K(139444K), 0.0144010 secs]
[Times: user=0.16 sys=0.00, real=0.01 secs]
1670.472: [CMS-concurrent-sweep-start]
1670.473: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1670.473: [CMS-concurrent-reset-start]
1670.482: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1672.482: [GC [1 CMS-initial-mark: 12849K(21428K)] 123916K(139444K),
0.0136110 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
1672.496: [CMS-concurrent-mark-start]
1672.513: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1672.513: [CMS-concurrent-preclean-start]
1672.513: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1672.513: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1677.530:
[CMS-concurrent-abortable-preclean: 0.711/5.017 secs] [Times:
user=0.71 sys=0.00, real=5.02 secs]
1677.530: [GC[YG occupancy: 111387 K (118016 K)]1677.530: [Rescan
(parallel) , 0.0129210 secs]1677.543: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 124236K(139444K), 0.0130360 secs]
[Times: user=0.16 sys=0.00, real=0.02 secs]
1677.543: [CMS-concurrent-sweep-start]
1677.545: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1677.545: [CMS-concurrent-reset-start]
1677.554: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1679.554: [GC [1 CMS-initial-mark: 12849K(21428K)] 124365K(139444K),
0.0125140 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1679.567: [CMS-concurrent-mark-start]
1679.584: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1679.584: [CMS-concurrent-preclean-start]
1679.584: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1679.584: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1684.631:
[CMS-concurrent-abortable-preclean: 0.714/5.047 secs] [Times:
user=0.72 sys=0.00, real=5.04 secs]
1684.631: [GC[YG occupancy: 112005 K (118016 K)]1684.631: [Rescan
(parallel) , 0.0146760 secs]1684.646: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 124855K(139444K), 0.0147930 secs]
[Times: user=0.16 sys=0.00, real=0.02 secs]
1684.646: [CMS-concurrent-sweep-start]
1684.648: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1684.648: [CMS-concurrent-reset-start]
1684.656: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1686.656: [GC [1 CMS-initial-mark: 12849K(21428K)] 125048K(139444K),
0.0138340 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1686.670: [CMS-concurrent-mark-start]
1686.686: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1686.686: [CMS-concurrent-preclean-start]
1686.687: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1686.687: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1691.689:
[CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1691.689: [GC[YG occupancy: 112518 K (118016 K)]1691.689: [Rescan
(parallel) , 0.0142600 secs]1691.703: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 12849K(21428K)] 125368K(139444K), 0.0143810 secs]
[Times: user=0.16 sys=0.00, real=0.02 secs]
1691.703: [CMS-concurrent-sweep-start]
1691.705: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1691.705: [CMS-concurrent-reset-start]
1691.714: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1693.714: [GC [1 CMS-initial-mark: 12849K(21428K)] 125497K(139444K),
0.0126710 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1693.727: [CMS-concurrent-mark-start]
1693.744: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1693.744: [CMS-concurrent-preclean-start]
1693.745: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1693.745: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1698.747:
[CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1698.748: [GC[YG occupancy: 112968 K (118016 K)]1698.748: [Rescan
(parallel) , 0.0147370 secs]1698.762: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 125818K(139444K), 0.0148490 secs]
[Times: user=0.17 sys=0.00, real=0.01 secs]
1698.763: [CMS-concurrent-sweep-start]
1698.764: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1698.764: [CMS-concurrent-reset-start]
1698.773: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1700.773: [GC [1 CMS-initial-mark: 12849K(21428K)] 125946K(139444K),
0.0128810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1700.786: [CMS-concurrent-mark-start]
1700.804: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1700.804: [CMS-concurrent-preclean-start]
1700.804: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1700.804: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1705.810:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1705.810: [GC[YG occupancy: 113417 K (118016 K)]1705.810: [Rescan
(parallel) , 0.0146750 secs]1705.825: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 126267K(139444K), 0.0147760 secs]
[Times: user=0.17 sys=0.00, real=0.02 secs]
1705.825: [CMS-concurrent-sweep-start]
1705.827: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1705.827: [CMS-concurrent-reset-start]
1705.836: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1707.836: [GC [1 CMS-initial-mark: 12849K(21428K)] 126395K(139444K),
0.0137570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1707.850: [CMS-concurrent-mark-start]
1707.866: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1707.866: [CMS-concurrent-preclean-start]
1707.867: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1707.867: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1712.878:
[CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1712.878: [GC[YG occupancy: 113866 K (118016 K)]1712.878: [Rescan
(parallel) , 0.0116340 secs]1712.890: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 126716K(139444K), 0.0117350 secs]
[Times: user=0.12 sys=0.00, real=0.01 secs]
1712.890: [CMS-concurrent-sweep-start]
1712.893: [CMS-concurrent-sweep: 0.002/0.003 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1712.893: [CMS-concurrent-reset-start]
1712.902: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1714.902: [GC [1 CMS-initial-mark: 12849K(21428K)] 126984K(139444K),
0.0134590 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
1714.915: [CMS-concurrent-mark-start]
1714.933: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1714.933: [CMS-concurrent-preclean-start]
1714.934: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1714.934: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1719.940:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.71 sys=0.00, real=5.00 secs]
1719.940: [GC[YG occupancy: 114552 K (118016 K)]1719.940: [Rescan
(parallel) , 0.0141320 secs]1719.955: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 127402K(139444K), 0.0142280 secs]
[Times: user=0.16 sys=0.01, real=0.02 secs]
1719.955: [CMS-concurrent-sweep-start]
1719.956: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1719.956: [CMS-concurrent-reset-start]
1719.965: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1721.966: [GC [1 CMS-initial-mark: 12849K(21428K)] 127530K(139444K),
0.0139120 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1721.980: [CMS-concurrent-mark-start]
1721.996: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1721.996: [CMS-concurrent-preclean-start]
1721.997: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1721.997: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1727.010:
[CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
user=0.71 sys=0.00, real=5.01 secs]
1727.010: [GC[YG occupancy: 115000 K (118016 K)]1727.010: [Rescan
(parallel) , 0.0123190 secs]1727.023: [weak refs processing, 0.0000130
secs] [1 CMS-remark: 12849K(21428K)] 127850K(139444K), 0.0124420 secs]
[Times: user=0.15 sys=0.00, real=0.01 secs]
1727.023: [CMS-concurrent-sweep-start]
1727.024: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1727.024: [CMS-concurrent-reset-start]
1727.033: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1729.034: [GC [1 CMS-initial-mark: 12849K(21428K)] 127978K(139444K),
0.0129330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1729.047: [CMS-concurrent-mark-start]
1729.064: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1729.064: [CMS-concurrent-preclean-start]
1729.064: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1729.064: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1734.075:
[CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1734.075: [GC[YG occupancy: 115449 K (118016 K)]1734.075: [Rescan
(parallel) , 0.0131600 secs]1734.088: [weak refs processing, 0.0000130
secs] [1 CMS-remark: 12849K(21428K)] 128298K(139444K), 0.0132810 secs]
[Times: user=0.16 sys=0.00, real=0.01 secs]
1734.089: [CMS-concurrent-sweep-start]
1734.091: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1734.091: [CMS-concurrent-reset-start]
1734.100: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1736.100: [GC [1 CMS-initial-mark: 12849K(21428K)] 128427K(139444K),
0.0141000 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
1736.115: [CMS-concurrent-mark-start]
1736.130: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1736.131: [CMS-concurrent-preclean-start]
1736.131: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1736.131: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1741.139:
[CMS-concurrent-abortable-preclean: 0.702/5.008 secs] [Times:
user=0.70 sys=0.00, real=5.01 secs]
1741.139: [GC[YG occupancy: 115897 K (118016 K)]1741.139: [Rescan
(parallel) , 0.0146880 secs]1741.154: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 12849K(21428K)] 128747K(139444K), 0.0148020 secs]
[Times: user=0.17 sys=0.00, real=0.02 secs]
1741.154: [CMS-concurrent-sweep-start]
1741.156: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1741.156: [CMS-concurrent-reset-start]
1741.165: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1742.898: [GC [1 CMS-initial-mark: 12849K(21428K)] 129085K(139444K),
0.0144050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1742.913: [CMS-concurrent-mark-start]
1742.931: [CMS-concurrent-mark: 0.017/0.019 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1742.931: [CMS-concurrent-preclean-start]
1742.932: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1742.932: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1748.016:
[CMS-concurrent-abortable-preclean: 0.728/5.084 secs] [Times:
user=0.73 sys=0.00, real=5.09 secs]
1748.016: [GC[YG occupancy: 116596 K (118016 K)]1748.016: [Rescan
(parallel) , 0.0149950 secs]1748.031: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 129446K(139444K), 0.0150970 secs]
[Times: user=0.17 sys=0.00, real=0.01 secs]
1748.031: [CMS-concurrent-sweep-start]
1748.033: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1748.033: [CMS-concurrent-reset-start]
1748.041: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1750.042: [GC [1 CMS-initial-mark: 12849K(21428K)] 129574K(139444K),
0.0141840 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1750.056: [CMS-concurrent-mark-start]
1750.073: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1750.073: [CMS-concurrent-preclean-start]
1750.074: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1750.074: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1755.080:
[CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
user=0.70 sys=0.00, real=5.00 secs]
1755.080: [GC[YG occupancy: 117044 K (118016 K)]1755.080: [Rescan
(parallel) , 0.0155560 secs]1755.096: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 12849K(21428K)] 129894K(139444K), 0.0156580 secs]
[Times: user=0.17 sys=0.00, real=0.02 secs]
1755.096: [CMS-concurrent-sweep-start]
1755.097: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1755.097: [CMS-concurrent-reset-start]
1755.105: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1756.660: [GC 1756.660: [ParNew: 117108K->482K(118016K), 0.0081410
secs] 129958K->24535K(144568K), 0.0083030 secs] [Times: user=0.05
sys=0.01, real=0.01 secs]
1756.668: [GC [1 CMS-initial-mark: 24053K(26552K)] 24599K(144568K),
0.0015280 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1756.670: [CMS-concurrent-mark-start]
1756.688: [CMS-concurrent-mark: 0.016/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1756.688: [CMS-concurrent-preclean-start]
1756.689: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1756.689: [GC[YG occupancy: 546 K (118016 K)]1756.689: [Rescan
(parallel) , 0.0018170 secs]1756.691: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(26552K)] 24599K(144568K), 0.0019050 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
1756.691: [CMS-concurrent-sweep-start]
1756.694: [CMS-concurrent-sweep: 0.004/0.004 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1756.694: [CMS-concurrent-reset-start]
1756.704: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
1758.704: [GC [1 CMS-initial-mark: 24053K(40092K)] 25372K(158108K),
0.0014030 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1758.705: [CMS-concurrent-mark-start]
1758.720: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
sys=0.00, real=0.01 secs]
1758.720: [CMS-concurrent-preclean-start]
1758.720: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.01 sys=0.00, real=0.00 secs]
1758.721: [GC[YG occupancy: 1319 K (118016 K)]1758.721: [Rescan
(parallel) , 0.0014940 secs]1758.722: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 24053K(40092K)] 25372K(158108K), 0.0015850 secs]
[Times: user=0.00 sys=0.00, real=0.00 secs]
1758.722: [CMS-concurrent-sweep-start]
1758.726: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1758.726: [CMS-concurrent-reset-start]
1758.735: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1760.735: [GC [1 CMS-initial-mark: 24053K(40092K)] 25565K(158108K),
0.0014530 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1760.737: [CMS-concurrent-mark-start]
1760.755: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1760.755: [CMS-concurrent-preclean-start]
1760.755: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1760.756: [GC[YG occupancy: 1512 K (118016 K)]1760.756: [Rescan
(parallel) , 0.0014970 secs]1760.757: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 24053K(40092K)] 25565K(158108K), 0.0015980 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
1760.757: [CMS-concurrent-sweep-start]
1760.761: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1760.761: [CMS-concurrent-reset-start]
1760.770: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1762.770: [GC [1 CMS-initial-mark: 24053K(40092K)] 25693K(158108K),
0.0013680 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1762.772: [CMS-concurrent-mark-start]
1762.788: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
1762.788: [CMS-concurrent-preclean-start]
1762.788: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1762.788: [GC[YG occupancy: 1640 K (118016 K)]1762.789: [Rescan
(parallel) , 0.0020360 secs]1762.791: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 24053K(40092K)] 25693K(158108K), 0.0021450 secs]
[Times: user=0.01 sys=0.00, real=0.00 secs]
1762.791: [CMS-concurrent-sweep-start]
1762.794: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1762.794: [CMS-concurrent-reset-start]
1762.803: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1764.804: [GC [1 CMS-initial-mark: 24053K(40092K)] 26747K(158108K),
0.0014620 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1764.805: [CMS-concurrent-mark-start]
1764.819: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
1764.819: [CMS-concurrent-preclean-start]
1764.820: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1764.820: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1769.835: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.02 secs]
1769.835: [GC[YG occupancy: 3015 K (118016 K)]1769.835: [Rescan (parallel) , 0.0010360 secs]1769.836: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 27068K(158108K), 0.0011310 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1769.837: [CMS-concurrent-sweep-start]
1769.840: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1769.840: [CMS-concurrent-reset-start]
1769.849: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1771.850: [GC [1 CMS-initial-mark: 24053K(40092K)] 27196K(158108K), 0.0014740 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1771.851: [CMS-concurrent-mark-start]
1771.868: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
1771.868: [CMS-concurrent-preclean-start]
1771.868: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1771.868: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1776.913: [CMS-concurrent-abortable-preclean: 0.112/5.044 secs] [Times: user=0.12 sys=0.00, real=5.04 secs]
1776.913: [GC[YG occupancy: 4052 K (118016 K)]1776.913: [Rescan (parallel) , 0.0017790 secs]1776.915: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 28105K(158108K), 0.0018790 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1776.915: [CMS-concurrent-sweep-start]
1776.918: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1776.918: [CMS-concurrent-reset-start]
1776.927: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1778.928: [GC [1 CMS-initial-mark: 24053K(40092K)] 28233K(158108K), 0.0015470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1778.929: [CMS-concurrent-mark-start]
1778.947: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
1778.947: [CMS-concurrent-preclean-start]
1778.947: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1778.947: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1783.963: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
1783.963: [GC[YG occupancy: 4505 K (118016 K)]1783.963: [Rescan (parallel) , 0.0014480 secs]1783.965: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 28558K(158108K), 0.0015470 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1783.965: [CMS-concurrent-sweep-start]
1783.968: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1783.968: [CMS-concurrent-reset-start]
1783.977: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1785.978: [GC [1 CMS-initial-mark: 24053K(40092K)] 28686K(158108K), 0.0015760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1785.979: [CMS-concurrent-mark-start]
1785.996: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
1785.996: [CMS-concurrent-preclean-start]
1785.996: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1785.996: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1791.009: [CMS-concurrent-abortable-preclean: 0.107/5.013 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
1791.010: [GC[YG occupancy: 4954 K (118016 K)]1791.010: [Rescan (parallel) , 0.0020030 secs]1791.012: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 29007K(158108K), 0.0021040 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1791.012: [CMS-concurrent-sweep-start]
1791.015: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1791.015: [CMS-concurrent-reset-start]
1791.023: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1793.023: [GC [1 CMS-initial-mark: 24053K(40092K)] 29136K(158108K), 0.0017200 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1793.025: [CMS-concurrent-mark-start]
1793.044: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.08 sys=0.00, real=0.02 secs]
1793.044: [CMS-concurrent-preclean-start]
1793.045: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1793.045: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1798.137: [CMS-concurrent-abortable-preclean: 0.112/5.093 secs] [Times: user=0.11 sys=0.00, real=5.09 secs]
1798.137: [GC[YG occupancy: 6539 K (118016 K)]1798.137: [Rescan (parallel) , 0.0016650 secs]1798.139: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 30592K(158108K), 0.0017600 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1798.139: [CMS-concurrent-sweep-start]
1798.143: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1798.143: [CMS-concurrent-reset-start]
1798.152: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1800.152: [GC [1 CMS-initial-mark: 24053K(40092K)] 30721K(158108K), 0.0016650 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1800.154: [CMS-concurrent-mark-start]
1800.170: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07 sys=0.00, real=0.01 secs]
1800.170: [CMS-concurrent-preclean-start]
1800.171: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1800.171: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1805.181: [CMS-concurrent-abortable-preclean: 0.110/5.010 secs] [Times: user=0.12 sys=0.00, real=5.01 secs]
1805.181: [GC[YG occupancy: 8090 K (118016 K)]1805.181: [Rescan (parallel) , 0.0018850 secs]1805.183: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 32143K(158108K), 0.0019860 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1805.183: [CMS-concurrent-sweep-start]
1805.187: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1805.187: [CMS-concurrent-reset-start]
1805.196: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1807.196: [GC [1 CMS-initial-mark: 24053K(40092K)] 32272K(158108K), 0.0018760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1807.198: [CMS-concurrent-mark-start]
1807.216: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
1807.216: [CMS-concurrent-preclean-start]
1807.216: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1807.216: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1812.232: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
1812.232: [GC[YG occupancy: 8543 K (118016 K)]1812.232: [Rescan (parallel) , 0.0020890 secs]1812.234: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 32596K(158108K), 0.0021910 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1812.234: [CMS-concurrent-sweep-start]
1812.238: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1812.238: [CMS-concurrent-reset-start]
1812.247: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1812.928: [GC [1 CMS-initial-mark: 24053K(40092K)] 32661K(158108K), 0.0019710 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1812.930: [CMS-concurrent-mark-start]
1812.947: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
1812.947: [CMS-concurrent-preclean-start]
1812.947: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1812.948: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1817.963: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
1817.963: [GC[YG occupancy: 8928 K (118016 K)]1817.963: [Rescan (parallel) , 0.0011790 secs]1817.964: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 32981K(158108K), 0.0012750 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1817.964: [CMS-concurrent-sweep-start]
1817.968: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1817.968: [CMS-concurrent-reset-start]
1817.977: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1819.977: [GC [1 CMS-initial-mark: 24053K(40092K)] 33110K(158108K), 0.0018900 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1819.979: [CMS-concurrent-mark-start]
1819.996: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
1819.997: [CMS-concurrent-preclean-start]
1819.997: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1819.997: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1825.012: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
1825.013: [GC[YG occupancy: 9377 K (118016 K)]1825.013: [Rescan (parallel) , 0.0020580 secs]1825.015: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 33431K(158108K), 0.0021510 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1825.015: [CMS-concurrent-sweep-start]
1825.018: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1825.018: [CMS-concurrent-reset-start]
1825.027: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1827.028: [GC [1 CMS-initial-mark: 24053K(40092K)] 33559K(158108K), 0.0019140 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1827.030: [CMS-concurrent-mark-start]
1827.047: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
1827.047: [CMS-concurrent-preclean-start]
1827.047: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1827.047: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1832.066: [CMS-concurrent-abortable-preclean: 0.109/5.018 secs] [Times: user=0.12 sys=0.00, real=5.02 secs]
1832.066: [GC[YG occupancy: 9827 K (118016 K)]1832.066: [Rescan (parallel) , 0.0019440 secs]1832.068: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 33880K(158108K), 0.0020410 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1832.068: [CMS-concurrent-sweep-start]
1832.071: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1832.071: [CMS-concurrent-reset-start]
1832.081: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1832.935: [GC [1 CMS-initial-mark: 24053K(40092K)] 34093K(158108K), 0.0019830 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1832.937: [CMS-concurrent-mark-start]
1832.954: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
1832.954: [CMS-concurrent-preclean-start]
1832.955: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1832.955: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1837.970: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
1837.970: [GC[YG occupancy: 10349 K (118016 K)]1837.970: [Rescan (parallel) , 0.0019670 secs]1837.972: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 34402K(158108K), 0.0020800 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
1837.972: [CMS-concurrent-sweep-start]
1837.976: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1837.976: [CMS-concurrent-reset-start]
1837.985: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1839.985: [GC [1 CMS-initial-mark: 24053K(40092K)] 34531K(158108K), 0.0020220 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1839.987: [CMS-concurrent-mark-start]
1840.005: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.06 sys=0.01, real=0.02 secs]
1840.005: [CMS-concurrent-preclean-start]
1840.006: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1840.006: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1845.018: [CMS-concurrent-abortable-preclean: 0.106/5.012 secs] [Times: user=0.10 sys=0.01, real=5.01 secs]
1845.018: [GC[YG occupancy: 10798 K (118016 K)]1845.018: [Rescan (parallel) , 0.0015500 secs]1845.019: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 34851K(158108K), 0.0016500 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1845.020: [CMS-concurrent-sweep-start]
1845.023: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1845.023: [CMS-concurrent-reset-start]
1845.032: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1847.032: [GC [1 CMS-initial-mark: 24053K(40092K)] 34980K(158108K), 0.0020600 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1847.035: [CMS-concurrent-mark-start]
1847.051: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07 sys=0.00, real=0.01 secs]
1847.051: [CMS-concurrent-preclean-start]
1847.052: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1847.052: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1852.067: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.02 secs]
1852.067: [GC[YG occupancy: 11247 K (118016 K)]1852.067: [Rescan (parallel) , 0.0011880 secs]1852.069: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 35300K(158108K), 0.0012900 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1852.069: [CMS-concurrent-sweep-start]
1852.072: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1852.072: [CMS-concurrent-reset-start]
1852.081: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1854.082: [GC [1 CMS-initial-mark: 24053K(40092K)] 35429K(158108K), 0.0021010 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1854.084: [CMS-concurrent-mark-start]
1854.100: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
1854.100: [CMS-concurrent-preclean-start]
1854.101: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1854.101: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1859.116: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.12 sys=0.00, real=5.02 secs]
1859.116: [GC[YG occupancy: 11701 K (118016 K)]1859.117: [Rescan (parallel) , 0.0010230 secs]1859.118: [weak refs processing, 0.0000130 secs] [1 CMS-remark: 24053K(40092K)] 35754K(158108K), 0.0011230 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1859.118: [CMS-concurrent-sweep-start]
1859.121: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1859.121: [CMS-concurrent-reset-start]
1859.130: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1861.131: [GC [1 CMS-initial-mark: 24053K(40092K)] 35882K(158108K), 0.0021240 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1861.133: [CMS-concurrent-mark-start]
1861.149: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
1861.149: [CMS-concurrent-preclean-start]
1861.150: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1861.150: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1866.220: [CMS-concurrent-abortable-preclean: 0.112/5.070 secs] [Times: user=0.12 sys=0.00, real=5.07 secs]
1866.220: [GC[YG occupancy: 12388 K (118016 K)]1866.220: [Rescan (parallel) , 0.0027090 secs]1866.223: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 36441K(158108K), 0.0028070 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
1866.223: [CMS-concurrent-sweep-start]
1866.227: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1866.227: [CMS-concurrent-reset-start]
1866.236: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1868.236: [GC [1 CMS-initial-mark: 24053K(40092K)] 36569K(158108K), 0.0023650 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1868.239: [CMS-concurrent-mark-start]
1868.256: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
1868.256: [CMS-concurrent-preclean-start]
1868.257: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1868.257: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1873.267: [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times: user=0.13 sys=0.00, real=5.01 secs]
1873.268: [GC[YG occupancy: 12837 K (118016 K)]1873.268: [Rescan (parallel) , 0.0018720 secs]1873.270: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 36890K(158108K), 0.0019730 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1873.270: [CMS-concurrent-sweep-start]
1873.273: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1873.273: [CMS-concurrent-reset-start]
1873.282: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1875.283: [GC [1 CMS-initial-mark: 24053K(40092K)] 37018K(158108K), 0.0024410 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1875.285: [CMS-concurrent-mark-start]
1875.302: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
1875.302: [CMS-concurrent-preclean-start]
1875.302: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1875.303: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1880.318: [CMS-concurrent-abortable-preclean: 0.110/5.015 secs] [Times: user=0.12 sys=0.00, real=5.02 secs]
1880.318: [GC[YG occupancy: 13286 K (118016 K)]1880.318: [Rescan (parallel) , 0.0023860 secs]1880.321: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 37339K(158108K), 0.0024910 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1880.321: [CMS-concurrent-sweep-start]
1880.324: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1880.324: [CMS-concurrent-reset-start]
1880.333: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1882.334: [GC [1 CMS-initial-mark: 24053K(40092K)] 37467K(158108K), 0.0024090 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1882.336: [CMS-concurrent-mark-start]
1882.352: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
1882.352: [CMS-concurrent-preclean-start]
1882.353: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1882.353: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1887.368: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.12 sys=0.00, real=5.02 secs]
1887.368: [GC[YG occupancy: 13739 K (118016 K)]1887.368: [Rescan (parallel) , 0.0022370 secs]1887.370: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 37792K(158108K), 0.0023360 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
1887.371: [CMS-concurrent-sweep-start]
1887.374: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1887.374: [CMS-concurrent-reset-start]
1887.383: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1889.384: [GC [1 CMS-initial-mark: 24053K(40092K)] 37920K(158108K), 0.0024690 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1889.386: [CMS-concurrent-mark-start]
1889.404: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
1889.404: [CMS-concurrent-preclean-start]
1889.405: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1889.405: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1894.488: [CMS-concurrent-abortable-preclean: 0.112/5.083 secs] [Times: user=0.11 sys=0.00, real=5.08 secs]
1894.488: [GC[YG occupancy: 14241 K (118016 K)]1894.488: [Rescan (parallel) , 0.0020670 secs]1894.490: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 38294K(158108K), 0.0021630 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1894.490: [CMS-concurrent-sweep-start]
1894.494: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1894.494: [CMS-concurrent-reset-start]
1894.503: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1896.503: [GC [1 CMS-initial-mark: 24053K(40092K)] 38422K(158108K), 0.0025430 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1896.506: [CMS-concurrent-mark-start]
1896.524: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
1896.524: [CMS-concurrent-preclean-start]
1896.525: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1896.525: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1901.540: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
1901.540: [GC[YG occupancy: 14690 K (118016 K)]1901.540: [Rescan (parallel) , 0.0014810 secs]1901.542: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 38743K(158108K), 0.0015820 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1901.542: [CMS-concurrent-sweep-start]
1901.545: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1901.545: [CMS-concurrent-reset-start]
1901.555: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1903.555: [GC [1 CMS-initial-mark: 24053K(40092K)] 38871K(158108K), 0.0025990 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1903.558: [CMS-concurrent-mark-start]
1903.575: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
1903.575: [CMS-concurrent-preclean-start]
1903.576: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1903.576: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1908.586: [CMS-concurrent-abortable-preclean: 0.105/5.010 secs] [Times: user=0.10 sys=0.00, real=5.01 secs]
1908.587: [GC[YG occupancy: 15207 K (118016 K)]1908.587: [Rescan (parallel) , 0.0026240 secs]1908.589: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 39260K(158108K), 0.0027260 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1908.589: [CMS-concurrent-sweep-start]
1908.593: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1908.593: [CMS-concurrent-reset-start]
1908.602: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1910.602: [GC [1 CMS-initial-mark: 24053K(40092K)] 39324K(158108K), 0.0025610 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1910.605: [CMS-concurrent-mark-start]
1910.621: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
1910.621: [CMS-concurrent-preclean-start]
1910.622: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
1910.622: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1915.684: [CMS-concurrent-abortable-preclean: 0.112/5.062 secs] [Times: user=0.11 sys=0.00, real=5.07 secs]
1915.684: [GC[YG occupancy: 15592 K (118016 K)]1915.684: [Rescan (parallel) , 0.0023940 secs]1915.687: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 39645K(158108K), 0.0025050 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
1915.687: [CMS-concurrent-sweep-start]
1915.690: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1915.690: [CMS-concurrent-reset-start]
1915.699: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1917.700: [GC [1 CMS-initial-mark: 24053K(40092K)] 39838K(158108K), 0.0025010 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1917.702: [CMS-concurrent-mark-start]
1917.719: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
1917.719: [CMS-concurrent-preclean-start]
1917.719: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1917.719: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1922.735: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.01, real=5.02 secs]
1922.735: [GC[YG occupancy: 16198 K (118016 K)]1922.735: [Rescan (parallel) , 0.0028750 secs]1922.738: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 40251K(158108K), 0.0029760 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
1922.738: [CMS-concurrent-sweep-start]
1922.741: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1922.741: [CMS-concurrent-reset-start]
1922.751: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1922.957: [GC [1 CMS-initial-mark: 24053K(40092K)] 40324K(158108K), 0.0027380 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1922.960: [CMS-concurrent-mark-start]
1922.978: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
1922.978: [CMS-concurrent-preclean-start]
1922.979: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1922.979: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1927.994: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.02 secs]
1927.995: [GC[YG occupancy: 16645 K (118016 K)]1927.995: [Rescan (parallel) , 0.0013210 secs]1927.996: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 40698K(158108K), 0.0017610 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
1927.996: [CMS-concurrent-sweep-start]
1928.000: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1928.000: [CMS-concurrent-reset-start]
1928.009: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1930.009: [GC [1 CMS-initial-mark: 24053K(40092K)] 40826K(158108K), 0.0028310 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1930.012: [CMS-concurrent-mark-start]
1930.028: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
1930.028: [CMS-concurrent-preclean-start]
1930.029: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1930.029: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1935.044: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
1935.045: [GC[YG occupancy: 17098 K (118016 K)]1935.045: [Rescan (parallel) , 0.0015440 secs]1935.046: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 41151K(158108K), 0.0016490 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
1935.046: [CMS-concurrent-sweep-start]
1935.050: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1935.050: [CMS-concurrent-reset-start]
1935.059: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1937.059: [GC [1 CMS-initial-mark: 24053K(40092K)] 41279K(158108K), 0.0028290 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1937.062: [CMS-concurrent-mark-start]
1937.079: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
1937.079: [CMS-concurrent-preclean-start]
1937.079: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1937.079: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1942.095: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.01, real=5.02 secs]
1942.095: [GC[YG occupancy: 17547 K (118016 K)]1942.095: [Rescan (parallel) , 0.0030270 secs]1942.098: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 41600K(158108K), 0.0031250 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
1942.098: [CMS-concurrent-sweep-start]
1942.101: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1942.101: [CMS-concurrent-reset-start]
1942.111: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1944.111: [GC [1 CMS-initial-mark: 24053K(40092K)] 41728K(158108K), 0.0028080 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1944.114: [CMS-concurrent-mark-start]
1944.130: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
1944.130: [CMS-concurrent-preclean-start]
1944.131: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1944.131: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1949.146: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.12 sys=0.00, real=5.02 secs]
1949.146: [GC[YG occupancy: 17996 K (118016 K)]1949.146: [Rescan (parallel) , 0.0028800 secs]1949.149: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 42049K(158108K), 0.0029810 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
1949.149: [CMS-concurrent-sweep-start]
1949.152: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1949.152: [CMS-concurrent-reset-start]
1949.162: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1951.162: [GC [1 CMS-initial-mark: 24053K(40092K)] 42177K(158108K), 0.0028760 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1951.165: [CMS-concurrent-mark-start]
1951.184: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
1951.184: [CMS-concurrent-preclean-start]
1951.184: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1951.184: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1956.244: [CMS-concurrent-abortable-preclean: 0.112/5.059 secs] [Times: user=0.11 sys=0.01, real=5.05 secs]
1956.244: [GC[YG occupancy: 18498 K (118016 K)]1956.244: [Rescan (parallel) , 0.0019760 secs]1956.246: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 42551K(158108K), 0.0020750 secs] [Times: user=0.03 sys=0.00, real=0.00 secs]
1956.246: [CMS-concurrent-sweep-start]
1956.249: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1956.249: [CMS-concurrent-reset-start]
1956.259: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1958.259: [GC [1 CMS-initial-mark: 24053K(40092K)] 42747K(158108K), 0.0029160 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1958.262: [CMS-concurrent-mark-start]
1958.279: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
1958.279: [CMS-concurrent-preclean-start]
1958.279: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1958.279: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1963.295: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
1963.295: [GC[YG occupancy: 18951 K (118016 K)]1963.295: [Rescan (parallel) , 0.0020140 secs]1963.297: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 43004K(158108K), 0.0021100 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
1963.297: [CMS-concurrent-sweep-start]
1963.300: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1963.300: [CMS-concurrent-reset-start]
1963.310: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1965.310: [GC [1 CMS-initial-mark: 24053K(40092K)] 43132K(158108K), 0.0029420 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1965.313: [CMS-concurrent-mark-start]
1965.329: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
1965.329: [CMS-concurrent-preclean-start]
1965.330: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1965.330: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1970.345: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.02 secs]
1970.345: [GC[YG occupancy: 19400 K (118016 K)]1970.345: [Rescan (parallel) , 0.0031610 secs]1970.349: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 43453K(158108K), 0.0032580 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
1970.349: [CMS-concurrent-sweep-start]
1970.352: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1970.352: [CMS-concurrent-reset-start]
1970.361: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1972.362: [GC [1 CMS-initial-mark: 24053K(40092K)] 43581K(158108K), 0.0029960 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1972.365: [CMS-concurrent-mark-start]
1972.381: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
1972.381: [CMS-concurrent-preclean-start]
1972.382: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1972.382: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1977.397: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.12 sys=0.00, real=5.02 secs]
1977.398: [GC[YG occupancy: 19849 K (118016 K)]1977.398: [Rescan (parallel) , 0.0018110 secs]1977.399: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 43902K(158108K), 0.0019100 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
1977.400: [CMS-concurrent-sweep-start]
1977.403: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1977.403: [CMS-concurrent-reset-start]
1977.412: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1979.413: [GC [1 CMS-initial-mark: 24053K(40092K)] 44031K(158108K), 0.0030240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
1979.416: [CMS-concurrent-mark-start]
1979.434: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08 sys=0.00, real=0.02 secs]
1979.434: [CMS-concurrent-preclean-start]
1979.434: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1979.434: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1984.511: [CMS-concurrent-abortable-preclean: 0.112/5.077 secs] [Times: user=0.12 sys=0.00, real=5.07 secs]
1984.511: [GC[YG occupancy: 20556 K (118016 K)]1984.511: [Rescan (parallel) , 0.0032740 secs]1984.514: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 44609K(158108K), 0.0033720 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
1984.515: [CMS-concurrent-sweep-start]
1984.518: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1984.518: [CMS-concurrent-reset-start]
1984.527: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
1986.528: [GC [1 CMS-initial-mark: 24053K(40092K)] 44737K(158108K), 0.0032890 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1986.531: [CMS-concurrent-mark-start]
1986.548: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
1986.548: [CMS-concurrent-preclean-start]
1986.548: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1986.548: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1991.564: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.12 sys=0.00, real=5.01 secs]
1991.564: [GC[YG occupancy: 21005 K (118016 K)]1991.564: [Rescan (parallel) , 0.0022540 secs]1991.566: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 45058K(158108K), 0.0023650 secs]
[Times: user=0.02 sys=0.00, real=0.00 secs]
1991.566: [CMS-concurrent-sweep-start]
1991.570: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
1991.570: [CMS-concurrent-reset-start]
1991.579: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
1993.579: [GC [1 CMS-initial-mark: 24053K(40092K)] 45187K(158108K),
0.0032480 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
1993.583: [CMS-concurrent-mark-start]
1993.599: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
1993.599: [CMS-concurrent-preclean-start]
1993.600: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
1993.600: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 1998.688:
[CMS-concurrent-abortable-preclean: 0.112/5.089 secs] [Times:
user=0.10 sys=0.01, real=5.09 secs]
1998.689: [GC[YG occupancy: 21454 K (118016 K)]1998.689: [Rescan
(parallel) , 0.0025510 secs]1998.691: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 24053K(40092K)] 45507K(158108K), 0.0026500 secs]
[Times: user=0.03 sys=0.00, real=0.00 secs]
1998.691: [CMS-concurrent-sweep-start]
1998.695: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
1998.695: [CMS-concurrent-reset-start]
1998.704: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
2000.704: [GC [1 CMS-initial-mark: 24053K(40092K)] 45636K(158108K),
0.0033350 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2000.708: [CMS-concurrent-mark-start]
2000.726: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2000.726: [CMS-concurrent-preclean-start]
2000.726: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2000.726: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2005.742:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.12 sys=0.00, real=5.01 secs]
2005.742: [GC[YG occupancy: 21968 K (118016 K)]2005.742: [Rescan
(parallel) , 0.0027300 secs]2005.745: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 46021K(158108K), 0.0028560 secs]
[Times: user=0.02 sys=0.01, real=0.01 secs]
2005.745: [CMS-concurrent-sweep-start]
2005.748: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2005.748: [CMS-concurrent-reset-start]
2005.757: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.01, real=0.01 secs]
2007.758: [GC [1 CMS-initial-mark: 24053K(40092K)] 46217K(158108K),
0.0033290 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2007.761: [CMS-concurrent-mark-start]
2007.778: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
2007.778: [CMS-concurrent-preclean-start]
2007.778: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2007.778: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2012.794:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.12 sys=0.00, real=5.02 secs]
2012.794: [GC[YG occupancy: 22421 K (118016 K)]2012.794: [Rescan
(parallel) , 0.0036890 secs]2012.798: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 46474K(158108K), 0.0037910 secs]
[Times: user=0.02 sys=0.01, real=0.00 secs]
2012.798: [CMS-concurrent-sweep-start]
2012.801: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2012.801: [CMS-concurrent-reset-start]
2012.810: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2012.980: [GC [1 CMS-initial-mark: 24053K(40092K)] 46547K(158108K),
0.0033990 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2012.984: [CMS-concurrent-mark-start]
2013.004: [CMS-concurrent-mark: 0.019/0.020 secs] [Times: user=0.06
sys=0.01, real=0.02 secs]
2013.004: [CMS-concurrent-preclean-start]
2013.005: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2013.005: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2018.020:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.12 sys=0.00, real=5.01 secs]
2018.020: [GC[YG occupancy: 22867 K (118016 K)]2018.020: [Rescan
(parallel) , 0.0025180 secs]2018.023: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 24053K(40092K)] 46920K(158108K), 0.0026190 secs]
[Times: user=0.03 sys=0.00, real=0.00 secs]
2018.023: [CMS-concurrent-sweep-start]
2018.026: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
2018.026: [CMS-concurrent-reset-start]
2018.036: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
2020.036: [GC [1 CMS-initial-mark: 24053K(40092K)] 47048K(158108K),
0.0034020 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2020.039: [CMS-concurrent-mark-start]
2020.057: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2020.057: [CMS-concurrent-preclean-start]
2020.058: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2020.058: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2025.073:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.11 sys=0.00, real=5.01 secs]
2025.073: [GC[YG occupancy: 23316 K (118016 K)]2025.073: [Rescan
(parallel) , 0.0020110 secs]2025.075: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 47369K(158108K), 0.0021080 secs]
[Times: user=0.02 sys=0.00, real=0.00 secs]
2025.075: [CMS-concurrent-sweep-start]
2025.079: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2025.079: [CMS-concurrent-reset-start]
2025.088: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2027.088: [GC [1 CMS-initial-mark: 24053K(40092K)] 47498K(158108K),
0.0034100 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2027.092: [CMS-concurrent-mark-start]
2027.108: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2027.108: [CMS-concurrent-preclean-start]
2027.109: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2027.109: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2032.120:
[CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
user=0.10 sys=0.00, real=5.01 secs]
2032.120: [GC[YG occupancy: 23765 K (118016 K)]2032.120: [Rescan
(parallel) , 0.0025970 secs]2032.123: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 47818K(158108K), 0.0026940 secs]
[Times: user=0.03 sys=0.00, real=0.00 secs]
2032.123: [CMS-concurrent-sweep-start]
2032.126: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
2032.126: [CMS-concurrent-reset-start]
2032.135: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
2034.136: [GC [1 CMS-initial-mark: 24053K(40092K)] 47951K(158108K),
0.0034720 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2034.139: [CMS-concurrent-mark-start]
2034.156: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2034.156: [CMS-concurrent-preclean-start]
2034.156: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2034.156: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2039.171:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.11 sys=0.00, real=5.01 secs]
2039.172: [GC[YG occupancy: 24218 K (118016 K)]2039.172: [Rescan
(parallel) , 0.0038590 secs]2039.176: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 48271K(158108K), 0.0039560 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
2039.176: [CMS-concurrent-sweep-start]
2039.179: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2039.179: [CMS-concurrent-reset-start]
2039.188: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2041.188: [GC [1 CMS-initial-mark: 24053K(40092K)] 48400K(158108K),
0.0035110 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2041.192: [CMS-concurrent-mark-start]
2041.209: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
2041.209: [CMS-concurrent-preclean-start]
2041.209: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2041.209: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2046.268:
[CMS-concurrent-abortable-preclean: 0.108/5.058 secs] [Times:
user=0.12 sys=0.00, real=5.06 secs]
2046.268: [GC[YG occupancy: 24813 K (118016 K)]2046.268: [Rescan
(parallel) , 0.0042050 secs]2046.272: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 48866K(158108K), 0.0043070 secs]
[Times: user=0.04 sys=0.00, real=0.00 secs]
2046.272: [CMS-concurrent-sweep-start]
2046.275: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
2046.275: [CMS-concurrent-reset-start]
2046.285: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2048.285: [GC [1 CMS-initial-mark: 24053K(40092K)] 48994K(158108K),
0.0037700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2048.289: [CMS-concurrent-mark-start]
2048.307: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2048.307: [CMS-concurrent-preclean-start]
2048.307: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2048.307: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2053.323:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.11 sys=0.00, real=5.01 secs]
2053.323: [GC[YG occupancy: 25262 K (118016 K)]2053.323: [Rescan
(parallel) , 0.0030780 secs]2053.326: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 49315K(158108K), 0.0031760 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
2053.326: [CMS-concurrent-sweep-start]
2053.329: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2053.329: [CMS-concurrent-reset-start]
2053.338: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2055.339: [GC [1 CMS-initial-mark: 24053K(40092K)] 49444K(158108K),
0.0037730 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2055.343: [CMS-concurrent-mark-start]
2055.359: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
2055.359: [CMS-concurrent-preclean-start]
2055.360: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2055.360: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2060.373:
[CMS-concurrent-abortable-preclean: 0.107/5.013 secs] [Times:
user=0.11 sys=0.00, real=5.01 secs]
2060.373: [GC[YG occupancy: 25715 K (118016 K)]2060.373: [Rescan
(parallel) , 0.0037090 secs]2060.377: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 49768K(158108K), 0.0038110 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
2060.377: [CMS-concurrent-sweep-start]
2060.380: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2060.380: [CMS-concurrent-reset-start]
2060.389: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2062.390: [GC [1 CMS-initial-mark: 24053K(40092K)] 49897K(158108K),
0.0037860 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2062.394: [CMS-concurrent-mark-start]
2062.410: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2062.410: [CMS-concurrent-preclean-start]
2062.411: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2062.411: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2067.426:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.11 sys=0.00, real=5.02 secs]
2067.427: [GC[YG occupancy: 26231 K (118016 K)]2067.427: [Rescan
(parallel) , 0.0031980 secs]2067.430: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 24053K(40092K)] 50284K(158108K), 0.0033100 secs]
[Times: user=0.04 sys=0.00, real=0.00 secs]
2067.430: [CMS-concurrent-sweep-start]
2067.433: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
2067.433: [CMS-concurrent-reset-start]
2067.443: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2069.443: [GC [1 CMS-initial-mark: 24053K(40092K)] 50412K(158108K),
0.0038060 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2069.447: [CMS-concurrent-mark-start]
2069.465: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2069.465: [CMS-concurrent-preclean-start]
2069.465: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2069.465: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2074.535:
[CMS-concurrent-abortable-preclean: 0.112/5.070 secs] [Times:
user=0.12 sys=0.00, real=5.06 secs]
2074.535: [GC[YG occupancy: 26749 K (118016 K)]2074.535: [Rescan
(parallel) , 0.0040450 secs]2074.539: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 50802K(158108K), 0.0041460 secs]
[Times: user=0.04 sys=0.00, real=0.00 secs]
2074.539: [CMS-concurrent-sweep-start]
2074.543: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2074.543: [CMS-concurrent-reset-start]
2074.552: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
2076.552: [GC [1 CMS-initial-mark: 24053K(40092K)] 50930K(158108K),
0.0038960 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2076.556: [CMS-concurrent-mark-start]
2076.575: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2076.575: [CMS-concurrent-preclean-start]
2076.575: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2076.575: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2081.590:
[CMS-concurrent-abortable-preclean: 0.109/5.014 secs] [Times:
user=0.11 sys=0.00, real=5.01 secs]
2081.590: [GC[YG occupancy: 27198 K (118016 K)]2081.590: [Rescan
(parallel) , 0.0042420 secs]2081.594: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 51251K(158108K), 0.0043450 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
2081.594: [CMS-concurrent-sweep-start]
2081.597: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2081.597: [CMS-concurrent-reset-start]
2081.607: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2083.607: [GC [1 CMS-initial-mark: 24053K(40092K)] 51447K(158108K),
0.0038630 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2083.611: [CMS-concurrent-mark-start]
2083.628: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2083.628: [CMS-concurrent-preclean-start]
2083.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2083.628: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2088.642:
[CMS-concurrent-abortable-preclean: 0.108/5.014 secs] [Times:
user=0.11 sys=0.00, real=5.01 secs]
2088.642: [GC[YG occupancy: 27651 K (118016 K)]2088.642: [Rescan
(parallel) , 0.0031520 secs]2088.645: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 51704K(158108K), 0.0032520 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
2088.645: [CMS-concurrent-sweep-start]
2088.649: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2088.649: [CMS-concurrent-reset-start]
2088.658: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2090.658: [GC [1 CMS-initial-mark: 24053K(40092K)] 51832K(158108K),
0.0039130 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2090.662: [CMS-concurrent-mark-start]
2090.678: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
2090.678: [CMS-concurrent-preclean-start]
2090.679: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2090.679: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2095.690:
[CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
user=0.11 sys=0.00, real=5.01 secs]
2095.690: [GC[YG occupancy: 28100 K (118016 K)]2095.690: [Rescan
(parallel) , 0.0024460 secs]2095.693: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 52153K(158108K), 0.0025460 secs]
[Times: user=0.03 sys=0.00, real=0.00 secs]
2095.693: [CMS-concurrent-sweep-start]
2095.696: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
2095.696: [CMS-concurrent-reset-start]
2095.705: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
2096.616: [GC [1 CMS-initial-mark: 24053K(40092K)] 53289K(158108K),
0.0039340 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2096.620: [CMS-concurrent-mark-start]
2096.637: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
2096.637: [CMS-concurrent-preclean-start]
2096.638: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2096.638: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2101.654:
[CMS-concurrent-abortable-preclean: 0.110/5.015 secs] [Times:
user=0.12 sys=0.00, real=5.01 secs]
2101.654: [GC[YG occupancy: 29557 K (118016 K)]2101.654: [Rescan
(parallel) , 0.0034020 secs]2101.657: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 24053K(40092K)] 53610K(158108K), 0.0035000 secs]
[Times: user=0.04 sys=0.00, real=0.00 secs]
2101.657: [CMS-concurrent-sweep-start]
2101.661: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2101.661: [CMS-concurrent-reset-start]
2101.670: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2103.004: [GC [1 CMS-initial-mark: 24053K(40092K)] 53997K(158108K),
0.0042590 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2103.009: [CMS-concurrent-mark-start]
2103.027: [CMS-concurrent-mark: 0.017/0.019 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2103.027: [CMS-concurrent-preclean-start]
2103.028: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2103.028: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2108.043:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.10 sys=0.01, real=5.02 secs]
2108.043: [GC[YG occupancy: 30385 K (118016 K)]2108.044: [Rescan
(parallel) , 0.0048950 secs]2108.048: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 54438K(158108K), 0.0049930 secs]
[Times: user=0.04 sys=0.00, real=0.00 secs]
2108.049: [CMS-concurrent-sweep-start]
2108.052: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2108.052: [CMS-concurrent-reset-start]
2108.061: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2110.062: [GC [1 CMS-initial-mark: 24053K(40092K)] 54502K(158108K),
0.0042120 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
2110.066: [CMS-concurrent-mark-start]
2110.084: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2110.084: [CMS-concurrent-preclean-start]
2110.085: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2110.085: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2115.100:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.11 sys=0.00, real=5.01 secs]
2115.101: [GC[YG occupancy: 30770 K (118016 K)]2115.101: [Rescan
(parallel) , 0.0049040 secs]2115.106: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 54823K(158108K), 0.0050080 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
2115.106: [CMS-concurrent-sweep-start]
2115.109: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2115.109: [CMS-concurrent-reset-start]
2115.118: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2117.118: [GC [1 CMS-initial-mark: 24053K(40092K)] 54952K(158108K),
0.0042490 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2117.123: [CMS-concurrent-mark-start]
2117.139: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
2117.139: [CMS-concurrent-preclean-start]
2117.140: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2117.140: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2122.155:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.12 sys=0.00, real=5.02 secs]
2122.155: [GC[YG occupancy: 31219 K (118016 K)]2122.155: [Rescan
(parallel) , 0.0036460 secs]2122.159: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 24053K(40092K)] 55272K(158108K), 0.0037440 secs]
[Times: user=0.04 sys=0.00, real=0.00 secs]
2122.159: [CMS-concurrent-sweep-start]
2122.162: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2122.162: [CMS-concurrent-reset-start]
2122.172: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2124.172: [GC [1 CMS-initial-mark: 24053K(40092K)] 55401K(158108K),
0.0043010 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2124.176: [CMS-concurrent-mark-start]
2124.195: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2124.195: [CMS-concurrent-preclean-start]
2124.195: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2124.195: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2129.211:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.12 sys=0.00, real=5.01 secs]
2129.211: [GC[YG occupancy: 31669 K (118016 K)]2129.211: [Rescan
(parallel) , 0.0049870 secs]2129.216: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 24053K(40092K)] 55722K(158108K), 0.0050860 secs]
[Times: user=0.04 sys=0.00, real=0.01 secs]
2129.216: [CMS-concurrent-sweep-start]
2129.219: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
2129.219: [CMS-concurrent-reset-start]
2129.228: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2131.229: [GC [1 CMS-initial-mark: 24053K(40092K)] 55850K(158108K),
0.0042340 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2131.233: [CMS-concurrent-mark-start]
2131.249: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2131.249: [CMS-concurrent-preclean-start]
2131.249: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2131.249: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2136.292:
[CMS-concurrent-abortable-preclean: 0.108/5.042 secs] [Times:
user=0.11 sys=0.00, real=5.04 secs]
2136.292: [GC[YG occupancy: 32174 K (118016 K)]2136.292: [Rescan
(parallel) , 0.0037250 secs]2136.296: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 24053K(40092K)] 56227K(158108K), 0.0038250 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
2136.296: [CMS-concurrent-sweep-start]
2136.299: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2136.299: [CMS-concurrent-reset-start]
2136.308: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2138.309: [GC [1 CMS-initial-mark: 24053K(40092K)] 56356K(158108K),
0.0043040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2138.313: [CMS-concurrent-mark-start]
2138.329: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.05
sys=0.01, real=0.02 secs]
2138.329: [CMS-concurrent-preclean-start]
2138.329: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2138.329: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2143.341:
[CMS-concurrent-abortable-preclean: 0.106/5.011 secs] [Times:
user=0.11 sys=0.00, real=5.01 secs]
2143.341: [GC[YG occupancy: 32623 K (118016 K)]2143.341: [Rescan
(parallel) , 0.0038660 secs]2143.345: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 56676K(158108K), 0.0039760 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
2143.345: [CMS-concurrent-sweep-start]
2143.349: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2143.349: [CMS-concurrent-reset-start]
2143.358: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2145.358: [GC [1 CMS-initial-mark: 24053K(40092K)] 56805K(158108K),
0.0043390 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2145.362: [CMS-concurrent-mark-start]
2145.379: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
2145.379: [CMS-concurrent-preclean-start]
2145.379: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2145.379: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2150.393:
[CMS-concurrent-abortable-preclean: 0.108/5.014 secs] [Times:
user=0.11 sys=0.00, real=5.01 secs]
2150.393: [GC[YG occupancy: 33073 K (118016 K)]2150.393: [Rescan
(parallel) , 0.0038190 secs]2150.397: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 57126K(158108K), 0.0039210 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
2150.397: [CMS-concurrent-sweep-start]
2150.400: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2150.400: [CMS-concurrent-reset-start]
2150.410: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2152.410: [GC [1 CMS-initial-mark: 24053K(40092K)] 57254K(158108K),
0.0044080 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2152.415: [CMS-concurrent-mark-start]
2152.431: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
2152.431: [CMS-concurrent-preclean-start]
2152.432: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2152.432: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2157.447:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.11 sys=0.01, real=5.02 secs]
2157.447: [GC[YG occupancy: 33522 K (118016 K)]2157.447: [Rescan
(parallel) , 0.0038130 secs]2157.451: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 57575K(158108K), 0.0039160 secs]
[Times: user=0.04 sys=0.00, real=0.00 secs]
2157.451: [CMS-concurrent-sweep-start]
2157.454: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
2157.454: [CMS-concurrent-reset-start]
2157.464: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
2159.464: [GC [1 CMS-initial-mark: 24053K(40092K)] 57707K(158108K),
0.0045170 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2159.469: [CMS-concurrent-mark-start]
2159.483: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
sys=0.00, real=0.01 secs]
2159.483: [CMS-concurrent-preclean-start]
2159.483: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2159.483: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2164.491:
[CMS-concurrent-abortable-preclean: 0.111/5.007 secs] [Times:
user=0.12 sys=0.00, real=5.01 secs]
2164.491: [GC[YG occupancy: 34293 K (118016 K)]2164.491: [Rescan
(parallel) , 0.0052070 secs]2164.496: [weak refs processing, 0.0000120
secs] [1 CMS-remark: 24053K(40092K)] 58347K(158108K), 0.0053130 secs]
[Times: user=0.06 sys=0.00, real=0.01 secs]
2164.496: [CMS-concurrent-sweep-start]
2164.500: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2164.500: [CMS-concurrent-reset-start]
2164.509: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.01, real=0.01 secs]
2166.509: [GC [1 CMS-initial-mark: 24053K(40092K)] 58475K(158108K),
0.0045900 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2166.514: [CMS-concurrent-mark-start]
2166.533: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.07
sys=0.00, real=0.02 secs]
2166.533: [CMS-concurrent-preclean-start]
2166.533: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2166.533: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2171.549:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.11 sys=0.00, real=5.02 secs]
2171.549: [GC[YG occupancy: 34743 K (118016 K)]2171.549: [Rescan
(parallel) , 0.0052200 secs]2171.554: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 24053K(40092K)] 58796K(158108K), 0.0053210 secs]
[Times: user=0.05 sys=0.00, real=0.01 secs]
2171.554: [CMS-concurrent-sweep-start]
2171.558: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2171.558: [CMS-concurrent-reset-start]
2171.567: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2173.567: [GC [1 CMS-initial-mark: 24053K(40092K)] 58924K(158108K),
0.0046700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
2173.572: [CMS-concurrent-mark-start]
2173.588: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
sys=0.00, real=0.02 secs]
2173.588: [CMS-concurrent-preclean-start]
2173.589: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2173.589: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2178.604:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.10 sys=0.01, real=5.02 secs]
2178.605: [GC[YG occupancy: 35192 K (118016 K)]2178.605: [Rescan
(parallel) , 0.0041460 secs]2178.609: [weak refs processing, 0.0000110
secs] [1 CMS-remark: 24053K(40092K)] 59245K(158108K), 0.0042450 secs]
[Times: user=0.04 sys=0.00, real=0.00 secs]
2178.609: [CMS-concurrent-sweep-start]
2178.612: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
sys=0.00, real=0.00 secs]
2178.612: [CMS-concurrent-reset-start]
2178.622: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
sys=0.00, real=0.01 secs]
2180.622: [GC [1 CMS-initial-mark: 24053K(40092K)] 59373K(158108K),
0.0047200 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
2180.627: [CMS-concurrent-mark-start]
2180.645: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
sys=0.00, real=0.02 secs]
2180.645: [CMS-concurrent-preclean-start]
2180.645: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
user=0.00 sys=0.00, real=0.00 secs]
2180.645: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2185.661:
[CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
user=0.11 sys=0.00, real=5.01 secs]
2185.661: [GC[YG occupancy: 35645 K (118016 K)]2185.661: [Rescan
(parallel) , 0.0050730 secs]2185.666: [weak refs processing, 0.0000100
secs] [1 CMS-remark: 24053K(40092K)] 59698K(158108K), 0.0051720 secs]
[Times: user=0.04 sys=0.01, real=0.01 secs]
2185.666: [CMS-concurrent-sweep-start]
2185.670: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
sys=0.00, real=0.00 secs]
2185.670: [CMS-concurrent-reset-start]
2185.679: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
sys=0.00, real=0.01 secs]
2187.679: [GC [1 CMS-initial-mark: 24053K(40092K)] 59826K(158108K),
0.0047350 secs]

--
gregross:)

Re: long garbage collecting pause

Posted by Marcos Ortiz <ml...@uci.cu>.
On 02/10/2012 11:32, Greg Ross wrote:
> Thanks for the suggestions.
>
> I was attempting to tune the GC via mapred.child.java.opts in the job's
> Oozie config instead of in hbase-env.sh. I think this is why my efforts
> were to no avail. It was likely having no effect on the read/write
> performance. Is there any way of specifying job-specific HBase parameters
> instead of globally setting them in hbase-env.sh?
>
> The cluster has 175 nodes, each with 48GB of RAM. The overall data input
> size is 7TB and I pre-split the table into 30 regions initially, then 100
> in another attempt. Each job runs on 700GB chunks of the data. I used
> RegionSplitter to create and condition the table, so there's currently no
> compression. I'm thinking of recreating the table and 'alter'ing it with
> LZO compression before attempting the jobs again.
There are many things you can do to tune HBase performance.
Chapter 11 of Lars George's book "HBase: The Definitive Guide" is dedicated
to this tricky topic, and the HBase book has good pointers too:

http://hbase.apache.org/book.html#perf.reading

Thanks to Doug for the link.


>
> Cheers.
>
> Greg
>
>
>
> On Tue, Oct 2, 2012 at 7:20 AM, Damien Hardy <dh...@viadeoteam.com> wrote:
>
>> Hello
>>
>> 2012/10/2 Marcos Ortiz <ml...@uci.cu>
>>
>>> Another thing that I'm seeing is that one of your main processes is
>>> compaction, so you can optimize all this by increasing the size of your
>>> regions (by default the size of a region is 256 MB), but you may end up
>>> with a "split/compaction storm" on your hands, as Lars calls them in his
>>> book.
>>
>> Actually it seems like the default value for hbase.hregion.max.filesize in
>> 0.92 was increased to 1GB.
>> http://hbase.apache.org/book/upgrade0.92.html#d2051e266
>>
>> But you can set it higher (max is 20GB) and split manually.
>> http://hbase.apache.org/book/important_configurations.html#bigger.regions
>>
>> Cheers,
>>
>> --
>> Dam
>>
>
>

-- 
Marcos Ortiz Valmaseda,
Data Engineer && Senior System Administrator at UCI
Blog: http://marcosluis2186.posterous.com
Linkedin: http://www.linkedin.com/in/marcosluis2186
Twitter: @marcosluis2186





Re: long garbage collecting pause

Posted by Greg Ross <gr...@ngmoco.com>.
Thanks for the suggestions.

I was attempting to tune the GC via mapred.child.java.opts in the job's
Oozie config instead of in hbase-env.sh. I think this is why my efforts
were to no avail. It was likely having no effect on the read/write
performance. Is there any way of specifying job-specific HBase parameters
instead of globally setting them in hbase-env.sh?
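
For what it's worth, region server GC settings of the kind discussed in this
thread belong in hbase-env.sh on the region server hosts; mapred.child.java.opts
only reaches the map/reduce task JVMs. A minimal sketch (the flag values and
log path are illustrative, not a recommendation):

```shell
# hbase-env.sh -- JVM options for the region server process only.
# These do not affect MapReduce task JVMs (use mapred.child.java.opts there).
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -Xms4g -Xmx4g \
  -XX:+UseConcMarkSweepGC \
  -XX:+UseParNewGC \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
  -Xloggc:/var/log/hbase/gc-regionserver.log"
```

The region servers need a restart for the new options to take effect.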

The cluster has 175 nodes, each with 48GB of RAM. The overall data input
size is 7TB and I pre-split the table into 30 regions initially, then 100
in another attempt. Each job runs on 700GB chunks of the data. I used
RegionSplitter to create and condition the table, so there's currently no
compression. I'm thinking of recreating the table and 'alter'ing it with
LZO compression before attempting the jobs again.
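
In case it helps, the schema change can be sketched in the HBase shell (table
and family names taken from the logs in this thread; in 0.92 the table must be
disabled before the alter, and existing store files only pick up the new codec
after a major compaction). This assumes the LZO libraries are already installed
on every region server:

```
disable 'orwell_events'
alter 'orwell_events', {NAME => 'U', COMPRESSION => 'LZO'}
enable 'orwell_events'
major_compact 'orwell_events'   # rewrite existing store files with LZO
```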

Cheers.

Greg



On Tue, Oct 2, 2012 at 7:20 AM, Damien Hardy <dh...@viadeoteam.com> wrote:

> Hello
>
> 2012/10/2 Marcos Ortiz <ml...@uci.cu>
>
>>
>> Another thing that I'm seeing is that one of your main processes is
>> compaction, so you can optimize all this by increasing the size of your
>> regions (by default the size of a region is 256 MB), but you may end up
>> with a "split/compaction storm" on your hands, as Lars calls them in his
>> book.
>
>
> Actually it seems like the default value for hbase.hregion.max.filesize in
> 0.92 was increased to 1GB.
> http://hbase.apache.org/book/upgrade0.92.html#d2051e266
>
> But you can set it higher (max is 20GB) and split manually.
> http://hbase.apache.org/book/important_configurations.html#bigger.regions
>
> Cheers,
>
> --
> Dam
>



-- 
*gregross:)*

Re: long garbage collecting pause

Posted by Michael Segel <mi...@hotmail.com>.
You really don't want to go to 20GB.

Without knowing the number of regions... going beyond 1-2 GB may cause more headaches than it's worth. 

Sorry, but I tend to be very cautious when it comes to tuning. 

-Mike

On Oct 2, 2012, at 9:20 AM, Damien Hardy <dh...@viadeoteam.com> wrote:

> Hello
> 
> 2012/10/2 Marcos Ortiz <ml...@uci.cu>
> 
>> 
>> Another thing that I'm seeing is that one of your main processes is
>> compaction, so you can optimize all this by increasing the size of your
>> regions (by default the size of a region is 256 MB), but you may end up
>> with a "split/compaction storm" on your hands, as Lars calls them in his
>> book.
> 
> 
> Actually it seems like the default value for hbase.hregion.max.filesize in
> 0.92 was increased to 1GB.
> http://hbase.apache.org/book/upgrade0.92.html#d2051e266
> 
> But you can set it higher (max is 20GB) and split manually.
> http://hbase.apache.org/book/important_configurations.html#bigger.regions
> 
> Cheers,
> 
> -- 
> Dam


Re: long garbage collecting pause

Posted by Damien Hardy <dh...@viadeoteam.com>.
Hello

2012/10/2 Marcos Ortiz <ml...@uci.cu>

>
> Another thing that I'm seeing is that one of your main processes is
> compaction, so you can optimize all this by increasing the size of your
> regions (by default the size of a region is 256 MB), but you may end up
> with a "split/compaction storm" on your hands, as Lars calls them in his
> book.


Actually it seems like the default value for hbase.hregion.max.filesize in
0.92 was increased to 1GB.
http://hbase.apache.org/book/upgrade0.92.html#d2051e266

But you can set it higher (max is 20GB) and split manually.
http://hbase.apache.org/book/important_configurations.html#bigger.regions
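
As a sketch, that threshold lives in hbase-site.xml; the 4 GB value below is
purely illustrative:

```xml
<!-- hbase-site.xml: maximum store size (in bytes) before a region splits -->
<property>
  <name>hbase.hregion.max.filesize</name>
  <!-- 4 GB; illustrative value, requires a region server restart -->
  <value>4294967296</value>
</property>
```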

Cheers,

-- 
Dam

Re: long garbage collecting pause

Posted by Marcos Ortiz <ml...@uci.cu>.
On 01/10/2012 16:35, Greg Ross wrote:
> Hi,
>
> I'm having difficulty with a mapreduce job that has reducers that read
> from and write to HBase, version 0.92.1, r1298924. Row sizes vary
> greatly. As do the number of cells, although the number of cells is
> typically numbered in the tens, at most. The max cell size is 1MB.
0.94.1 is out with a lot of improvements related to performance. It would 
be better
if you used this version.
>
> I see the following in the logs followed by the region server promptly
> shutting down:
>
> 2012-10-01 19:08:47,858 [regionserver60020] WARN
> org.apache.hadoop.hbase.util.Sleeper: We slept 28970ms instead of
> 3000ms, this is likely due to a long garbage collecting pause and it's
> usually bad, see
> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
>
> The full logs, including GC are below.
>
> Although new to HBase, I've read up on the likely GC issues and their
> remedies. I've implemented the recommended solutions and still to no
> avail.
>
> Here's what I've tried:
>
> (1) increased the RAM to 4G
What is the exact size of your RAM?
> (2) set -XX:+UseConcMarkSweepGC
> (3) set -XX:+UseParNewGC
> (4) set -XX:CMSInitiatingOccupancyFraction=N where I've attempted N=[40..70]
> (5) I've called context.progress() in the reducer before and after
> reading and writing
> (6) memstore is enabled
>
> Is there anything else that I might have missed?
>
> Thanks,
>
> Greg
I'm seeing in the HBase logs that a lot of block requests are failing.
Can you send us the output of jps?
Can you check the filesystem's health with hadoop fsck?
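
A quick way to gather both, assuming shell access to a region server host
(output formats vary by Hadoop version):

```
jps                                  # list running JVMs; expect HRegionServer, DataNode, ...
hadoop fsck / -files -blocks | tail  # HDFS health report; the summary should read HEALTHY
```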

Another thing that I'm seeing is that one of your main processes is
compaction, so you can optimize all this by increasing the size of your
regions (by default the size of a region is 256 MB), but you may end up
with a "split/compaction storm" on your hands, as Lars calls them in his
book.

Instead of using the default mechanism for region splitting and compaction,
you can turn it off and do it manually with the split and major_compact
commands.
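
The manual approach can be sketched in the HBase shell (table name taken from
the logs in this thread):

```
split 'orwell_events'           # request a split of the table's regions
major_compact 'orwell_events'   # schedule a major compaction for the table
```

Automatic splitting is effectively disabled by raising
hbase.hregion.max.filesize well above the expected region size.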

You can also evaluate using compression in your cluster to save a lot of 
space on your region servers.

What is the size of your cluster?
You can use SPM, Ganglia or OpenTSDB to constantly monitor your cluster.

Best wishes
>
> hbase logs
> ========
>
> 2012-10-01 19:09:48,293
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/.tmp/d2ee47650b224189b0c27d1c20929c03
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> 2012-10-01 19:09:48,884
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 5 file(s) in U of
> orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.
> into d2ee47650b224189b0c27d1c20929c03, size=723.0m; total size for
> store is 723.0m
> 2012-10-01 19:09:48,884
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.,
> storeName=U, fileCount=5, fileSize=1.4g, priority=2,
> time=10631266687564968; duration=35sec
> 2012-10-01 19:09:48,886
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
> 2012-10-01 19:09:48,887
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 5
> file(s) in U of
> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/.tmp,
> seqid=132201184, totalSize=1.4g
> 2012-10-01 19:10:04,191
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/.tmp/2e5534fea8b24eaf9cc1e05dea788c01
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> 2012-10-01 19:10:04,868
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 5 file(s) in U of
> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
> into 2e5534fea8b24eaf9cc1e05dea788c01, size=626.5m; total size for
> store is 626.5m
> 2012-10-01 19:10:04,868
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
> storeName=U, fileCount=5, fileSize=1.4g, priority=2,
> time=10631266696614208; duration=15sec
> 2012-10-01 19:14:04,992
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
> 2012-10-01 19:14:04,993
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/.tmp,
> seqid=132198830, totalSize=863.8m
> 2012-10-01 19:14:19,147
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/.tmp/b741f8501ad248418c48262d751f6e86
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/U/b741f8501ad248418c48262d751f6e86
> 2012-10-01 19:14:19,381
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
> into b741f8501ad248418c48262d751f6e86, size=851.4m; total size for
> store is 851.4m
> 2012-10-01 19:14:19,381
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.,
> storeName=U, fileCount=2, fileSize=863.8m, priority=5,
> time=10631557965747111; duration=14sec
> 2012-10-01 19:14:19,381
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
> 2012-10-01 19:14:19,381
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/.tmp,
> seqid=132198819, totalSize=496.7m
> 2012-10-01 19:14:27,337
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/.tmp/78040c736c4149a884a1bdcda9916416
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/U/78040c736c4149a884a1bdcda9916416
> 2012-10-01 19:14:27,514
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
> into 78040c736c4149a884a1bdcda9916416, size=487.5m; total size for
> store is 487.5m
> 2012-10-01 19:14:27,514
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.,
> storeName=U, fileCount=3, fileSize=496.7m, priority=4,
> time=10631557966599560; duration=8sec
> 2012-10-01 19:14:27,514
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
> 2012-10-01 19:14:27,514
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/.tmp,
> seqid=132200816, totalSize=521.7m
> 2012-10-01 19:14:36,962
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/.tmp/0142b8bcdda948c185887358990af6d1
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/U/0142b8bcdda948c185887358990af6d1
> 2012-10-01 19:14:37,171
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
> into 0142b8bcdda948c185887358990af6d1, size=510.7m; total size for
> store is 510.7m
> 2012-10-01 19:14:37,171
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.,
> storeName=U, fileCount=3, fileSize=521.7m, priority=4,
> time=10631557967125617; duration=9sec
> 2012-10-01 19:14:37,172
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
> 2012-10-01 19:14:37,172
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/.tmp,
> seqid=132198832, totalSize=565.5m
> 2012-10-01 19:14:57,082
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/.tmp/44a27dce8df04306908579c22be76786
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/U/44a27dce8df04306908579c22be76786
> 2012-10-01 19:14:57,429
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
> into 44a27dce8df04306908579c22be76786, size=557.7m; total size for
> store is 557.7m
> 2012-10-01 19:14:57,429
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.,
> storeName=U, fileCount=3, fileSize=565.5m, priority=4,
> time=10631557967207683; duration=20sec
> 2012-10-01 19:14:57,429
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
> 2012-10-01 19:14:57,430
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/.tmp,
> seqid=132199414, totalSize=845.6m
> 2012-10-01 19:16:54,394
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/.tmp/771813ba0c87449ebd99d5e7916244f8
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/U/771813ba0c87449ebd99d5e7916244f8
> 2012-10-01 19:16:54,636
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
> into 771813ba0c87449ebd99d5e7916244f8, size=827.3m; total size for
> store is 827.3m
> 2012-10-01 19:16:54,636
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.,
> storeName=U, fileCount=3, fileSize=845.6m, priority=4,
> time=10631557967560440; duration=1mins, 57sec
> 2012-10-01 19:16:54,636
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
> 2012-10-01 19:16:54,637
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/.tmp,
> seqid=132198824, totalSize=1012.4m
> 2012-10-01 19:17:35,610
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/.tmp/771a4124c763468c8dea927cb53887ee
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/U/771a4124c763468c8dea927cb53887ee
> 2012-10-01 19:17:35,874
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
> into 771a4124c763468c8dea927cb53887ee, size=974.0m; total size for
> store is 974.0m
> 2012-10-01 19:17:35,875
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.,
> storeName=U, fileCount=3, fileSize=1012.4m, priority=4,
> time=10631557967678796; duration=41sec
> 2012-10-01 19:17:35,875
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
> 2012-10-01 19:17:35,875
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/.tmp,
> seqid=132198815, totalSize=530.5m
> 2012-10-01 19:17:47,481
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/.tmp/24328f8244f747bf8ba81b74ef2893fa
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/U/24328f8244f747bf8ba81b74ef2893fa
> 2012-10-01 19:17:47,741
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
> into 24328f8244f747bf8ba81b74ef2893fa, size=524.0m; total size for
> store is 524.0m
> 2012-10-01 19:17:47,741
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.,
> storeName=U, fileCount=3, fileSize=530.5m, priority=4,
> time=10631557967807915; duration=11sec
> 2012-10-01 19:17:47,741
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
> 2012-10-01 19:17:47,741
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/.tmp,
> seqid=132201190, totalSize=529.3m
> 2012-10-01 19:17:58,031
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/.tmp/cae48d1b96eb4440a7bcd5fa3b4c070b
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/U/cae48d1b96eb4440a7bcd5fa3b4c070b
> 2012-10-01 19:17:58,232
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
> into cae48d1b96eb4440a7bcd5fa3b4c070b, size=521.3m; total size for
> store is 521.3m
> 2012-10-01 19:17:58,232
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.,
> storeName=U, fileCount=3, fileSize=529.3m, priority=4,
> time=10631557967959079; duration=10sec
> 2012-10-01 19:17:58,232
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
> 2012-10-01 19:17:58,232
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/.tmp,
> seqid=132199205, totalSize=475.2m
> 2012-10-01 19:18:06,764
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/.tmp/ba51afdc860048b6b2e1047b06fb3b29
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/U/ba51afdc860048b6b2e1047b06fb3b29
> 2012-10-01 19:18:07,065
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
> into ba51afdc860048b6b2e1047b06fb3b29, size=474.5m; total size for
> store is 474.5m
> 2012-10-01 19:18:07,065
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.,
> storeName=U, fileCount=3, fileSize=475.2m, priority=4,
> time=10631557968104570; duration=8sec
> 2012-10-01 19:18:07,065
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
> 2012-10-01 19:18:07,065
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/.tmp,
> seqid=132198822, totalSize=522.5m
> 2012-10-01 19:18:18,306
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/.tmp/7a0bd16b11f34887b2690e9510071bf0
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/U/7a0bd16b11f34887b2690e9510071bf0
> 2012-10-01 19:18:18,439
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
> into 7a0bd16b11f34887b2690e9510071bf0, size=520.0m; total size for
> store is 520.0m
> 2012-10-01 19:18:18,440
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.,
> storeName=U, fileCount=2, fileSize=522.5m, priority=5,
> time=10631557965863914; duration=11sec
> 2012-10-01 19:18:18,440
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
> 2012-10-01 19:18:18,440
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/.tmp,
> seqid=132198823, totalSize=548.0m
> 2012-10-01 19:18:32,288
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/.tmp/dcd050acc2e747738a90aebaae8920e4
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/U/dcd050acc2e747738a90aebaae8920e4
> 2012-10-01 19:18:32,431
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
> into dcd050acc2e747738a90aebaae8920e4, size=528.2m; total size for
> store is 528.2m
> 2012-10-01 19:18:32,431
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.,
> storeName=U, fileCount=2, fileSize=548.0m, priority=5,
> time=10631557966071838; duration=13sec
> 2012-10-01 19:18:32,431
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
> 2012-10-01 19:18:32,431
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/.tmp,
> seqid=132199001, totalSize=475.9m
> 2012-10-01 19:18:43,154
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/.tmp/15a9167cd9754fd4b3674fe732648a03
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/U/15a9167cd9754fd4b3674fe732648a03
> 2012-10-01 19:18:43,322
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
> into 15a9167cd9754fd4b3674fe732648a03, size=475.9m; total size for
> store is 475.9m
> 2012-10-01 19:18:43,322
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.,
> storeName=U, fileCount=2, fileSize=475.9m, priority=5,
> time=10631557966273447; duration=10sec
> 2012-10-01 19:18:43,322
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
> 2012-10-01 19:18:43,322
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/.tmp,
> seqid=132198833, totalSize=824.8m
> 2012-10-01 19:19:00,252
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/.tmp/bf8da91da0824a909f684c3ecd0ee8da
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/U/bf8da91da0824a909f684c3ecd0ee8da
> 2012-10-01 19:19:00,788
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
> into bf8da91da0824a909f684c3ecd0ee8da, size=803.0m; total size for
> store is 803.0m
> 2012-10-01 19:19:00,788
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.,
> storeName=U, fileCount=2, fileSize=824.8m, priority=5,
> time=10631557966382580; duration=17sec
> 2012-10-01 19:19:00,788
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
> 2012-10-01 19:19:00,788
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/.tmp,
> seqid=132198810, totalSize=565.3m
> 2012-10-01 19:19:11,311
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/.tmp/5cd2032f48bc4287b8866165dcb6d3e6
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/U/5cd2032f48bc4287b8866165dcb6d3e6
> 2012-10-01 19:19:11,504
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
> into 5cd2032f48bc4287b8866165dcb6d3e6, size=553.5m; total size for
> store is 553.5m
> 2012-10-01 19:19:11,504
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.,
> storeName=U, fileCount=2, fileSize=565.3m, priority=5,
> time=10631557966480961; duration=10sec
> 2012-10-01 19:19:11,504
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
> 2012-10-01 19:19:11,504
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/.tmp,
> seqid=132198825, totalSize=519.6m
> 2012-10-01 19:19:22,186
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/.tmp/6f29b3b15f1747c196ac9aa79f4835b1
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/U/6f29b3b15f1747c196ac9aa79f4835b1
> 2012-10-01 19:19:22,437
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
> into 6f29b3b15f1747c196ac9aa79f4835b1, size=512.7m; total size for
> store is 512.7m
> 2012-10-01 19:19:22,437
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.,
> storeName=U, fileCount=2, fileSize=519.6m, priority=5,
> time=10631557966769107; duration=10sec
> 2012-10-01 19:19:22,437
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
> 2012-10-01 19:19:22,437
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/.tmp,
> seqid=132198836, totalSize=528.3m
> 2012-10-01 19:19:34,752
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/.tmp/d836630f7e2b4212848d7e4edc7238f1
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/U/d836630f7e2b4212848d7e4edc7238f1
> 2012-10-01 19:19:34,945
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
> into d836630f7e2b4212848d7e4edc7238f1, size=504.3m; total size for
> store is 504.3m
> 2012-10-01 19:19:34,945
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.,
> storeName=U, fileCount=2, fileSize=528.3m, priority=5,
> time=10631557967026388; duration=12sec
> 2012-10-01 19:19:34,945
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
> 2012-10-01 19:19:34,945
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/.tmp,
> seqid=132198841, totalSize=813.8m
> 2012-10-01 19:19:49,303
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/.tmp/c70692c971cd4e899957f9d5b189372e
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/U/c70692c971cd4e899957f9d5b189372e
> 2012-10-01 19:19:49,428
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
> into c70692c971cd4e899957f9d5b189372e, size=813.7m; total size for
> store is 813.7m
> 2012-10-01 19:19:49,428
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.,
> storeName=U, fileCount=2, fileSize=813.8m, priority=5,
> time=10631557967436197; duration=14sec
> 2012-10-01 19:19:49,428
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
> 2012-10-01 19:19:49,429
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/.tmp,
> seqid=132198642, totalSize=812.0m
> 2012-10-01 19:20:38,718
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/.tmp/bf99f97891ed42f7847a11cfb8f46438
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/U/bf99f97891ed42f7847a11cfb8f46438
> 2012-10-01 19:20:38,825
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
> into bf99f97891ed42f7847a11cfb8f46438, size=811.3m; total size for
> store is 811.3m
> 2012-10-01 19:20:38,825
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.,
> storeName=U, fileCount=2, fileSize=812.0m, priority=5,
> time=10631557968183922; duration=49sec
> 2012-10-01 19:20:38,826
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
> 2012-10-01 19:20:38,826
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/.tmp,
> seqid=132198138, totalSize=588.7m
> 2012-10-01 19:20:48,274
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/.tmp/9f44b7eeab58407ca98bb4ec90126035
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/U/9f44b7eeab58407ca98bb4ec90126035
> 2012-10-01 19:20:48,383
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
> into 9f44b7eeab58407ca98bb4ec90126035, size=573.4m; total size for
> store is 573.4m
> 2012-10-01 19:20:48,383
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.,
> storeName=U, fileCount=2, fileSize=588.7m, priority=5,
> time=10631557968302831; duration=9sec
> 2012-10-01 19:20:48,383
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
> 2012-10-01 19:20:48,383
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/.tmp,
> seqid=132198644, totalSize=870.8m
> 2012-10-01 19:21:04,998
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/.tmp/920844c25b1847c6ac4b880e8cf1d5b0
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/U/920844c25b1847c6ac4b880e8cf1d5b0
> 2012-10-01 19:21:05,107
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
> into 920844c25b1847c6ac4b880e8cf1d5b0, size=869.0m; total size for
> store is 869.0m
> 2012-10-01 19:21:05,107
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.,
> storeName=U, fileCount=2, fileSize=870.8m, priority=5,
> time=10631557968521590; duration=16sec
> 2012-10-01 19:21:05,107
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
> 2012-10-01 19:21:05,107
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/.tmp,
> seqid=132198622, totalSize=885.3m
> 2012-10-01 19:21:27,231
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/.tmp/c85d413975d642fc914253bd08f3484f
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/U/c85d413975d642fc914253bd08f3484f
> 2012-10-01 19:21:27,791
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
> into c85d413975d642fc914253bd08f3484f, size=848.3m; total size for
> store is 848.3m
> 2012-10-01 19:21:27,791
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.,
> storeName=U, fileCount=2, fileSize=885.3m, priority=5,
> time=10631557968628383; duration=22sec
> 2012-10-01 19:21:27,791
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
> 2012-10-01 19:21:27,791
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/.tmp,
> seqid=132198621, totalSize=796.5m
> 2012-10-01 19:21:42,374
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/.tmp/ce543c630dd142309af6dca2a9ab5786
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/U/ce543c630dd142309af6dca2a9ab5786
> 2012-10-01 19:21:42,515
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
> into ce543c630dd142309af6dca2a9ab5786, size=795.5m; total size for
> store is 795.5m
> 2012-10-01 19:21:42,516
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.,
> storeName=U, fileCount=2, fileSize=796.5m, priority=5,
> time=10631557968713853; duration=14sec
> 2012-10-01 19:49:58,159 [ResponseProcessor for block
> blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor
> exception  for block
> blk_5535637699691880681_51616301java.io.EOFException
>      at java.io.DataInputStream.readFully(DataInputStream.java:180)
>      at java.io.DataInputStream.readLong(DataInputStream.java:399)
>      at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:120)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2634)
>
> 2012-10-01 19:49:58,167 [IPC Server handler 87 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
> {"processingtimems":46208,"client":"10.100.102.155:38534","timeRange":[0,9223372036854775807],"starttimems":1349120951956,"responsesize":329939,"class":"HRegionServer","table":"orwell_events","cacheBlocks":true,"families":{"U":["ALL"]},"row":"00322994","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
> 2012-10-01 19:49:58,160
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have
> not heard from server in 56633ms for sessionid 0x137ec64368509f7,
> closing socket connection and attempting reconnect
> 2012-10-01 19:49:58,160 [regionserver60020] WARN
> org.apache.hadoop.hbase.util.Sleeper: We slept 49116ms instead of
> 3000ms, this is likely due to a long garbage collecting pause and it's
> usually bad, see
> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> 2012-10-01 19:49:58,160
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have
> not heard from server in 53359ms for sessionid 0x137ec64368509f6,
> closing socket connection and attempting reconnect
> 2012-10-01 19:49:58,320 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] INFO
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 waiting for responder to exit.
> 2012-10-01 19:49:58,380 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
> 2012-10-01 19:49:58,380 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
> 10.100.101.156:50010
> 2012-10-01 19:49:59,113 [regionserver60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: Unhandled
> exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT
> rejected; currently processing
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
> org.apache.hadoop.hbase.YouAreDeadException:
> org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected;
> currently processing
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>      at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
>      at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:797)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:688)
>      at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected;
> currently processing
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>      at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:222)
>      at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:148)
>      at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:844)
>      at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
>      at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:918)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      at $Proxy8.regionServerReport(Unknown Source)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:794)
>      ... 2 more
> 2012-10-01 19:49:59,114 [regionserver60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:49:59,397 [IPC Server handler 36 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
> {"processingtimems":47521,"client":"10.100.102.176:60221","timeRange":[0,9223372036854775807],"starttimems":1349120951875,"responsesize":699312,"class":"HRegionServer","table":"orwell_events","cacheBlocks":true,"families":{"U":["ALL"]},"row":"00318223","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
> 2012-10-01 19:50:00,355 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
> primary datanode 10.100.102.122:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:00,355
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to
> server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181
> 2012-10-01 19:50:00,356
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section
> 'Client' could not be found. If you are not using SASL, you may ignore
> this. On the other hand, if you expected SASL to work, please fix your
> JAAS configuration.
> 2012-10-01 19:50:00,356 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.122:50010 failed 1 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
> retry...
> 2012-10-01 19:50:00,357
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
> session
> 2012-10-01 19:50:00,358
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
> server; r-o mode will be unavailable
> 2012-10-01 19:50:00,359 [regionserver60020-EventThread] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272:
> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6
> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6 received
> expired from ZooKeeper, aborting
> org.apache.zookeeper.KeeperException$SessionExpiredException:
> KeeperErrorCode = Session expired
>      at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:374)
>      at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:271)
>      at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
>      at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
> 2012-10-01 19:50:00,359
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper
> service, session 0x137ec64368509f6 has expired, closing socket
> connection
> 2012-10-01 19:50:00,359 [regionserver60020-EventThread] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:00,367 [regionserver60020-EventThread] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
> numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1201,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> 2012-10-01 19:50:00,367 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
> numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1201,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> 2012-10-01 19:50:00,381
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to
> server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181
> 2012-10-01 19:50:00,401 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unhandled
> exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT
> rejected; currently processing
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
> 2012-10-01 19:50:00,403
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section
> 'Client' could not be found. If you are not using SASL, you may ignore
> this. On the other hand, if you expected SASL to work, please fix your
> JAAS configuration.
> 2012-10-01 19:50:00,412 [regionserver60020-EventThread] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED:
> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6
> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6 received
> expired from ZooKeeper, aborting
> 2012-10-01 19:50:00,412 [regionserver60020-EventThread] INFO
> org.apache.zookeeper.ClientCnxn: EventThread shut down
> 2012-10-01 19:50:00,412 [regionserver60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
> 2012-10-01 19:50:00,413
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
> session
> 2012-10-01 19:50:00,413 [IPC Server handler 9 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server handler 20 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 20 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server handler 2 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server handler 10 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 10 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server listener on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on
> 60020
> 2012-10-01 19:50:00,413 [IPC Server handler 12 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 12 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 21 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 21 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 13 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 13 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 19 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 19 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 22 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 22 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 11 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 11 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: Sending interrupt
> to stop the worker thread
> 2012-10-01 19:50:00,414 [IPC Server handler 6 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Stopping
> infoServer
> 2012-10-01 19:50:00,414 [IPC Server handler 0 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 28 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 28 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 7 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server handler 15 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 15 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 5 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 48 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 48 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server handler 14 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 14 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server handler 18 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 18 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 37 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 37 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 47 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 47 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 50 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 50 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 45 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 45 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 36 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 36 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 43 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 43 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 42 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 42 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 38 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 38 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 8 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 40 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 40 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 34 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 34 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 4 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020:
> exiting
> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@5fa9b60a,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320394"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.117:56438: output error
> 2012-10-01 19:50:00,414 [IPC Server handler 61 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59104
> remote=/10.100.101.156:50010]. 59988 millis timeout left.
>      at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1243)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020
> caught: java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 31 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 31 on 60020:
> exiting
> 2012-10-01 19:50:00,414
> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272]
> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
> SplitLogWorker interrupted while waiting for task, exiting:
> java.lang.InterruptedException
> 2012-10-01 19:50:00,563
> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272]
> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
> SplitLogWorker data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 3201413024070455305:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59115
> remote=/10.100.101.156:50010]. 59999 millis timeout left.
>      at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>      at java.io.DataInputStream.readShort(DataInputStream.java:295)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,563 [IPC Server Responder] INFO
> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
> 2012-10-01 19:50:00,414 [IPC Server handler 27 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 27 on 60020:
> exiting
> 2012-10-01 19:50:00,414
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
> server; r-o mode will be unavailable
> 2012-10-01 19:50:00,414 [IPC Server handler 55 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block -2144655386884254555:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59108
> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>      at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>      at java.io.DataInputStream.readInt(DataInputStream.java:370)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1350)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,649
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper
> service, session 0x137ec64368509f7 has expired, closing socket
> connection
> 2012-10-01 19:50:00,414 [IPC Server handler 39 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.173:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
> for block -2100467641393578191:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:48825
> remote=/10.100.102.173:50010]. 60000 millis timeout left.
>      at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,414 [IPC Server handler 26 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -5183799322211896791:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59078
> remote=/10.100.101.156:50010]. 59949 millis timeout left.
>      at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,414 [IPC Server handler 85 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -5183799322211896791:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59082
> remote=/10.100.101.156:50010]. 59950 millis timeout left.
>      at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,414 [IPC Server handler 57 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -1763662403960466408:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59116
> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>      at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>      at java.io.DataInputStream.readShort(DataInputStream.java:295)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,649 [IPC Server handler 79 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,649 [regionserver60020-EventThread] INFO
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> This client just lost it's session with ZooKeeper, trying to
> reconnect.
> 2012-10-01 19:50:00,649 [IPC Server handler 89 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,649 [PRI IPC Server handler 3 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 3 on 60020:
> exiting
> 2012-10-01 19:50:00,649 [PRI IPC Server handler 0 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020:
> exiting
> 2012-10-01 19:50:00,700 [IPC Server handler 56 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 56 on 60020:
> exiting
> 2012-10-01 19:50:00,649 [PRI IPC Server handler 2 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 2 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [IPC Server handler 54 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 54 on 60020:
> exiting
> 2012-10-01 19:50:00,563 [IPC Server Responder] INFO
> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
> 2012-10-01 19:50:00,701 [IPC Server handler 71 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 71 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [IPC Server handler 79 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.193:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,563 [IPC Server handler 16 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,701 [PRI IPC Server handler 9 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 9 on 60020:
> exiting
> 2012-10-01 19:50:00,563 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,415 [IPC Server handler 60 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@7eee7b96,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321525"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.125:49043: output error
> 2012-10-01 19:50:00,704 [IPC Server handler 3 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 6550563574061266649:java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,717 [IPC Server handler 49 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 49 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 94 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 94 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [IPC Server handler 83 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 83 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [PRI IPC Server handler 1 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 1 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [PRI IPC Server handler 7 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 7 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [IPC Server handler 82 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 82 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [PRI IPC Server handler 6 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 6 on 60020:
> exiting
> 2012-10-01 19:50:00,719 [IPC Server handler 16 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.107:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,701 [IPC Server handler 74 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 74 on 60020:
> exiting
> 2012-10-01 19:50:00,719 [IPC Server handler 86 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 86 on 60020:
> exiting
> 2012-10-01 19:50:00,721 [IPC Server handler 60 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020
> caught: java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:00,721 [IPC Server handler 60 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [PRI IPC Server handler 5 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 5 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [regionserver60020] INFO org.mortbay.log:
> Stopped SelectChannelConnector@0.0.0.0:60030
> 2012-10-01 19:50:00,722 [IPC Server handler 35 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 35 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 16 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.133:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,722 [IPC Server handler 98 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 98 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [IPC Server handler 68 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 68 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [IPC Server handler 64 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 64 on 60020:
> exiting
> 2012-10-01 19:50:00,673 [IPC Server handler 33 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 33 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 76 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 76 on 60020:
> exiting
> 2012-10-01 19:50:00,673 [regionserver60020-EventThread] INFO
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Trying to reconnect to zookeeper
> 2012-10-01 19:50:00,736 [IPC Server handler 84 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 84 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 95 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 95 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 75 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 75 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 92 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 92 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 88 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 88 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 67 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 67 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 30 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 30 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 80 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 80 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 62 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 62 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 52 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 52 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 32 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 32 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 97 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 97 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 96 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 96 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 93 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 93 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 73 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 73 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.47:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,722 [IPC Server handler 87 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 87 on 60020:
> exiting
> 2012-10-01 19:50:00,721 [IPC Server handler 81 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 81 on 60020:
> exiting
> 2012-10-01 19:50:00,721 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,721 [IPC Server handler 59 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block -9081461281107361903:java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,719 [IPC Server handler 65 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 65 on 60020:
> exiting
> 2012-10-01 19:50:00,721 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedChannelException
>      at java.nio.channels.spi.AbstractSelectableChannel.configureBlocking(AbstractSelectableChannel.java:252)
>      at org.apache.hadoop.net.SocketIOWithTimeout.<init>(SocketIOWithTimeout.java:66)
>      at org.apache.hadoop.net.SocketInputStream$Reader.<init>(SocketInputStream.java:50)
>      at org.apache.hadoop.net.SocketInputStream.<init>(SocketInputStream.java:73)
>      at org.apache.hadoop.net.SocketInputStream.<init>(SocketInputStream.java:91)
>      at org.apache.hadoop.net.NetUtils.getInputStream(NetUtils.java:323)
>      at org.apache.hadoop.net.NetUtils.getInputStream(NetUtils.java:299)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1474)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,721 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59074
> remote=/10.100.101.156:50010]. 59947 millis timeout left.
>      at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,811 [IPC Server handler 59 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.135:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block -9081461281107361903:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,719 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59107
> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>      at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,831 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.153:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,719 [IPC Server handler 39 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.144:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
> for block -2100467641393578191:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,719 [IPC Server handler 26 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.138:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,852 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.174:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,719 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block 5946486101046455013:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59091
> remote=/10.100.101.156:50010]. 59953 millis timeout left.
>      at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>      at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>      at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>      at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>      at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,719 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.148:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,719 [IPC Server handler 53 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 53 on 60020:
> exiting
> 2012-10-01 19:50:00,719 [IPC Server handler 79 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.154:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,719 [IPC Server handler 89 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.47:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,719 [IPC Server handler 46 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 4946845190538507957:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59113
> remote=/10.100.101.156:50010]. 59999 millis timeout left.
>      at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>      at java.io.DataInputStream.readShort(DataInputStream.java:295)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,895 [IPC Server handler 26 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.139:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,701 [IPC Server handler 91 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 91 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 3 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.114:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 6550563574061266649:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,899 [IPC Server handler 89 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.134:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,717 [PRI IPC Server handler 4 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 4 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 77 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 77 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [PRI IPC Server handler 8 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 8 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 99 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 99 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 85 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.138:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,717 [IPC Server handler 51 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 51 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 57 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.138:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,717 [IPC Server handler 55 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.180:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block -2144655386884254555:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,717 [IPC Server handler 70 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 70 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 61 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.174:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,904 [IPC Server handler 55 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.173:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block -2144655386884254555:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,705 [IPC Server handler 23 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,705 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,704 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>      at java.io.DataInputStream.read(DataInputStream.java:132)
>      at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>      at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>      at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>      at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>      at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>      at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,904 [IPC Server handler 61 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.97:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,904 [IPC Server handler 23 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.144:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,904 [regionserver60020-EventThread] INFO
> org.apache.zookeeper.ZooKeeper: Initiating client connection,
> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
> sessionTimeout=180000 watcher=hconnection
> 2012-10-01 19:50:00,905 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.72:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,904 [IPC Server handler 55 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-2144655386884254555_51616216 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,904 [IPC Server handler 57 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.144:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,901 [IPC Server handler 85 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-5183799322211896791_51616591 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,899 [IPC Server handler 89 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_5937357897784147544_51616546 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,899 [IPC Server handler 3 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_6550563574061266649_51616152 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,896 [IPC Server handler 46 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_4946845190538507957_51616628 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,896 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.133:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>      at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>      at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>      at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>      at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,896 [IPC Server handler 26 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-5183799322211896791_51616591 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,896 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.175:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,895 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.97:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,894 [IPC Server handler 39 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.151:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
> for block -2100467641393578191:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,894 [IPC Server handler 79 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_2209451090614340242_51616188 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,857 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.101:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,856 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.134:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,839 [IPC Server handler 59 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.194:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block -9081461281107361903:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,811 [IPC Server handler 16 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_4946845190538507957_51616628 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,787 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.134:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,780 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.134:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,736 [IPC Server handler 63 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 63 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 72 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 72 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 78 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 78 on 60020:
> exiting
> 2012-10-01 19:50:00,906 [IPC Server handler 59 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-9081461281107361903_51616031 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,906 [IPC Server handler 39 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-2100467641393578191_51531005 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,906 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.145:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>      at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>      at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>      at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>      at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,905 [IPC Server handler 57 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-1763662403960466408_51616605 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,905 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.162:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,904 [IPC Server handler 23 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-1763662403960466408_51616605 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,904 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.72:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:00,904 [IPC Server handler 61 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_1768076108943205533_51616106 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,941 [regionserver60020-SendThread()] INFO
> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
> /10.100.102.197:2181
> 2012-10-01 19:50:00,941 [regionserver60020-EventThread] INFO
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
> of this process is 20776@data3024.ngpipes.milp.ngmoco.com
> 2012-10-01 19:50:00,942
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section
> 'Client' could not be found. If you are not using SASL, you may ignore
> this. On the other hand, if you expected SASL to work, please fix your
> JAAS configuration.
> 2012-10-01 19:50:00,943
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
> session
> 2012-10-01 19:50:00,962
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
> server; r-o mode will be unavailable
> 2012-10-01 19:50:00,962
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
> sessionid = 0x137ec64373dd4b3, negotiated timeout = 40000
> 2012-10-01 19:50:00,971 [regionserver60020-EventThread] INFO
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Reconnected successfully. This disconnect could have been caused by a
> network partition or a long-running GC pause, either way it's
> recommended that you verify your environment.
> 2012-10-01 19:50:00,971 [regionserver60020-EventThread] INFO
> org.apache.zookeeper.ClientCnxn: EventThread shut down
> 2012-10-01 19:50:01,018 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>      at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>      at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>      at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>      at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,018 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,019 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.133:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>      at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>      at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>      at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>      at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,019 [IPC Server handler 41 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_5946486101046455013_51616031 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:01,020 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.162:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,021 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,023 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.47:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,023 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.47:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,024 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.174:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,024 [IPC Server handler 61 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@20c6e4bc,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321393"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.118:57165: output error
> 2012-10-01 19:50:01,038 [IPC Server handler 61 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:01,038 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.134:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,038 [IPC Server handler 61 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020:
> exiting
> 2012-10-01 19:50:01,038 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.148:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,039 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.97:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,039 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.153:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,039 [IPC Server handler 66 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_1768076108943205533_51616106 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:01,039 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.101:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,041 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,042 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.153:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,044 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.175:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>
> 2012-10-01 19:50:01,090 [IPC Server handler 29 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00321084/U:BAHAMUTIOS_1/1348883706322/Put,
> lastKey=00324324/U:user/1348900694793/Put, avgKeyLen=31,
> avgValueLen=125185, entries=6053, length=758129544,
> cur=00321312/U:KINGDOMSQUESTSIPAD_2/1349024761759/Put/vlen=460950]
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_8387547514055202675_51616042
> file=/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      ... 17 more
> 2012-10-01 19:50:01,091 [IPC Server handler 24 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=00318964/U:user/1349118541276/Put/vlen=311046]
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_2851854722247682142_51616579
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      ... 14 more
> 2012-10-01 19:50:01,091 [IPC Server handler 1 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=0032027/U:KINGDOMSQUESTS_10/1349118531396/Put/vlen=401149]
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_3201413024070455305_51616611
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      ... 14 more
> 2012-10-01 19:50:01,091 [IPC Server handler 25 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=00319173/U:TINYTOWERANDROID_3/1349024232716/Put/vlen=129419]
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_2851854722247682142_51616579
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      ... 14 more
> 2012-10-01 19:50:01,091 [IPC Server handler 90 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=00316914/U:PETCAT_2/1349118542022/Put/vlen=499140]
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_5937357897784147544_51616546
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      ... 14 more
> 2012-10-01 19:50:01,091 [IPC Server handler 17 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=00317054/U:BAHAMUTIOS_4/1348869430278/Put/vlen=104012]
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_5937357897784147544_51616546
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      ... 17 more
> 2012-10-01 19:50:01,091 [IPC Server handler 58 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=00316983/U:TINYTOWERANDROID_1/1349118439250/Put/vlen=417924]
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_5937357897784147544_51616546
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>      ... 14 more
> 2012-10-01 19:50:01,091 [IPC Server handler 89 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=00317043/U:BAHAMUTANDROID_7/1348968079952/Put/vlen=419212]
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>      at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>      at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>      at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>      at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_5937357897784147544_51616546
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>      at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>      at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>      at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>      ... 17 more
> 2012-10-01 19:50:01,094 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 2012-10-01 19:50:01,094 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 2012-10-01 19:50:01,093 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 2012-10-01 19:50:01,093 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 2012-10-01 19:50:01,092 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 2012-10-01 19:50:01,092 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 2012-10-01 19:50:01,091 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 2012-10-01 19:50:01,095 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
> 2012-10-01 19:50:01,097 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
> 10.100.101.156:50010
> 2012-10-01 19:50:01,115 [IPC Server handler 39 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@2743ecf8,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00390925"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.122:51758: output error
> 2012-10-01 19:50:01,139 [IPC Server handler 39 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:01,139 [IPC Server handler 39 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020:
> exiting
> 2012-10-01 19:50:01,151 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
> primary datanode 10.100.102.122:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:01,151 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.122:50010 failed 2 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
> retry...
> 2012-10-01 19:50:01,153 [IPC Server handler 89 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@7137feec,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00317043"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.68:55302: output error
> 2012-10-01 19:50:01,154 [IPC Server handler 89 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:01,154 [IPC Server handler 89 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020:
> exiting
> 2012-10-01 19:50:01,156 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@6b9a9eba,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321504"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.176:32793: output error
> 2012-10-01 19:50:01,157 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:01,158 [IPC Server handler 66 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020:
> exiting
> 2012-10-01 19:50:01,159 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@586761c,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00391525"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.155:39850: output error
> 2012-10-01 19:50:01,160 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:01,160 [IPC Server handler 41 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020:
> exiting
> 2012-10-01 19:50:01,216 [regionserver60020.compactionChecker] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer$CompactionChecker:
> regionserver60020.compactionChecker exiting
> 2012-10-01 19:50:01,216 [regionserver60020.logRoller] INFO
> org.apache.hadoop.hbase.regionserver.LogRoller: LogRoller exiting.
> 2012-10-01 19:50:01,216 [regionserver60020.cacheFlusher] INFO
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
> regionserver60020.cacheFlusher exiting
> 2012-10-01 19:50:01,217 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: aborting server
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
> 2012-10-01 19:50:01,218 [regionserver60020] INFO
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Closed zookeeper sessionid=0x137ec64373dd4b3
> 2012-10-01 19:50:01,270
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,24294294,1349027918385.068e6f4f7b8a81fb21e49fe3ac47f262.
> 2012-10-01 19:50:01,271
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96510144,1348960969795.fe2a133a17d09a65a6b0d4fb60e6e051.
> 2012-10-01 19:50:01,272
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56499174,1349027424070.7f767ca333bef3dcdacc9a6c673a8350.
> 2012-10-01 19:50:01,273
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96515494,1348960969795.8ab4e1d9f4e4c388f3f4f18eec637e8a.
> 2012-10-01 19:50:01,273
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,98395724,1348969940123.08188cc246bf752c17cfe57f99970924.
> 2012-10-01 19:50:01,274
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
> 2012-10-01 19:50:01,275
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56604984,1348940650040.14639a082062e98abfea8ae3fff5d2c7.
> 2012-10-01 19:50:01,275
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56880144,1348969971950.ece85a086a310aacc2da259a3303e67e.
> 2012-10-01 19:50:01,276
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
> 2012-10-01 19:50:01,277
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,31267284,1348961229728.fc429276c44f5c274f00168f12128bad.
> 2012-10-01 19:50:01,278
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56569824,1348940809479.9808dac5b895fc9b8f9892c4b72b3804.
> 2012-10-01 19:50:01,279
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56425354,1349031095620.e4965f2e57729ff9537986da3e19258c.
> 2012-10-01 19:50:01,280
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96504305,1348964001164.77f75cf8ba76ebc4417d49f019317d0a.
> 2012-10-01 19:50:01,280
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,60743825,1348962513777.f377f704db5f0d000e36003338e017b1.
> 2012-10-01 19:50:01,283
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,09603014,1349026790546.d634bfe659bdf2f45ec89e53d2d38791.
> 2012-10-01 19:50:01,283
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,31274021,1348961229728.e93382b458a84c22f2e5aeb9efa737b5.
> 2012-10-01 19:50:01,285
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56462454,1348982699951.a2dafbd054bf65aa6f558dc9a2d839a1.
> 2012-10-01 19:50:01,286
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> Orwell,48814673,1348270987327.29818ea19d62126d5616a7ba7d7dae21.
> 2012-10-01 19:50:01,288
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56610954,1348940650040.3609c1bfc2be6936577b6be493e7e8d9.
> 2012-10-01 19:50:01,289
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
> 2012-10-01 19:50:01,289
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,05205763,1348941089603.957ea0e428ba6ff21174ecdda96f9fdc.
> 2012-10-01 19:50:01,289
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56349615,1348941138879.dfabbd25c59fd6c34a58d9eacf4c096f.
> 2012-10-01 19:50:01,292
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56503505,1349027424070.129160a78f13c17cc9ea16ff3757cda9.
> 2012-10-01 19:50:01,292
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,91248264,1348942310344.a93982b8f91f260814885bc0afb4fbb9.
> 2012-10-01 19:50:01,293
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,98646724,1348980566403.a4f2a16d1278ad1246068646c4886502.
> 2012-10-01 19:50:01,293
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56454594,1348982903997.7107c6a1b2117fb59f68210ce82f2cc9.
> 2012-10-01 19:50:01,294
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56564144,1348940809479.636092bb3ec2615b115257080427d091.
> 2012-10-01 19:50:01,295
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_user_events,06252594,1348582793143.499f0a0f4704afa873c83f141f5e0324.
> 2012-10-01 19:50:01,296
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56617164,1348941287729.3992a80a6648ab62753b4998331dcfdf.
> 2012-10-01 19:50:01,296
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,98390944,1348969940123.af160e450632411818fa8d01b2c2ed0b.
> 2012-10-01 19:50:01,297
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56703743,1348941223663.5cc2fcb82080dbf14956466c31f1d27c.
> 2012-10-01 19:50:01,297
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
> 2012-10-01 19:50:01,298
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56693584,1348942631318.f01b179c1fad1f18b97b37fc8f730898.
> 2012-10-01 19:50:01,299
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_user_events,12140615,1348582250428.7822f7f5ceea852b04b586fdf34debff.
> 2012-10-01 19:50:01,300
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
> 2012-10-01 19:50:01,300
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96420705,1348942597601.a063e06eb840ee49bb88474ee8e22160.
> 2012-10-01 19:50:01,300
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
> 2012-10-01 19:50:01,300
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96432674,1348961425148.1a793cf2137b9599193a1e2d5d9749c5.
> 2012-10-01 19:50:01,302
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
> 2012-10-01 19:50:01,303
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,44371574,1348961840615.00f5b4710a43f2ee75d324bebb054323.
> 2012-10-01 19:50:01,304
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,562fc921,1348941189517.cff261c585416844113f232960c8d6b4.
> 2012-10-01 19:50:01,304
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56323831,1348941216581.0b0f3bdb03ce9e4f58156a4143018e0e.
> 2012-10-01 19:50:01,305
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56480194,1349028080664.03a7046ffcec7e1f19cdb2f9890a353e.
> 2012-10-01 19:50:01,306
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56418294,1348940288044.c872be05981c047e8c1ee4765b92a74d.
> 2012-10-01 19:50:01,306
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,53590305,1348940776419.4c98d7846622f2d8dad4e998dae81d2b.
> 2012-10-01 19:50:01,307
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96445963,1348942353563.66a0f602720191bf21a1dfd12eec4a35.
> 2012-10-01 19:50:01,307
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
> 2012-10-01 19:50:01,307
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56305294,1348941189517.20f67941294c259e2273d3e0b7ae5198.
> 2012-10-01 19:50:01,308
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56516115,1348981132325.0f753cb87c1163d95d9d10077d6308db.
> 2012-10-01 19:50:01,309
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56796924,1348941269761.843e0aee0b15d67b810c7b3fe5a2dda7.
> 2012-10-01 19:50:01,309
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56440004,1348941150045.7033cb81a66e405d7bf45cd55ab010e3.
> 2012-10-01 19:50:01,309
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56317864,1348941124299.0de45283aa626fc83b2c026e1dd8bfec.
> 2012-10-01 19:50:01,310
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56809673,1348941834500.08244d4ed5f7fdf6d9ac9c73fbfd3947.
> 2012-10-01 19:50:01,310
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56894864,1348970959541.fc19a6ffe18f29203369d32ad1b102ce.
> 2012-10-01 19:50:01,311
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56382491,1348940876960.2392137bf0f4cb695c08c0fb22ce5294.
> 2012-10-01 19:50:01,312
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,95128264,1349026585563.5dc569af8afe0a84006b80612c15007f.
> 2012-10-01 19:50:01,312
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,5631146,1348941124299.b7c10be9855b5e8ba3a76852920627f9.
> 2012-10-01 19:50:01,312
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56710424,1348940462668.a370c149c232ebf4427e070eb28079bc.
> 2012-10-01 19:50:01,314 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Session: 0x137ec64373dd4b3 closed
> 2012-10-01 19:50:01,314 [regionserver60020-EventThread] INFO
> org.apache.zookeeper.ClientCnxn: EventThread shut down
> 2012-10-01 19:50:01,314 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 78
> regions to close
> 2012-10-01 19:50:01,317
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96497834,1348964001164.0b12f37b74b2124ef9f27d1ef0ebb17a.
> 2012-10-01 19:50:01,318
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56507574,1349027965795.79113c51d318a11286b39397ebbfdf04.
> 2012-10-01 19:50:01,319
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,24297525,1349027918385.047533f3d801709a26c895a01dcc1a73.
> 2012-10-01 19:50:01,320
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96439694,1348961425148.038e0e43a6e56760e4daae6f34bfc607.
> 2012-10-01 19:50:01,320
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,82811715,1348904784424.88fae4279f9806bef745d90f7ad37241.
> 2012-10-01 19:50:01,321
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56699434,1348941223663.ef3ccf0af60ee87450806b393f89cb6e.
> 2012-10-01 19:50:01,321
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
> 2012-10-01 19:50:01,322
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
> 2012-10-01 19:50:01,322
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
> 2012-10-01 19:50:01,323
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56465563,1348982699951.f34a29c0c4fc32e753d12db996ccc995.
> 2012-10-01 19:50:01,324
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56450734,1349027937173.c70110b3573a48299853117c4287c7be.
> 2012-10-01 19:50:01,325
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56361984,1349029457686.6c8d6974741e59df971da91c7355de1c.
> 2012-10-01 19:50:01,327
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56814705,1348962077056.69fd74167a3c5c2961e45d339b962ca9.
> 2012-10-01 19:50:01,327
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,00389105,1348978080963.6463149a16179d4e44c19bb49e4b4a81.
> 2012-10-01 19:50:01,329
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56558944,1348940893836.03bd1c0532949ec115ca8d5215dbb22f.
> 2012-10-01 19:50:01,330 [IPC Server handler 59 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@112ba2bf,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00392783"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.135:34935: output error
> 2012-10-01 19:50:01,330
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,5658955,1349027142822.e65d0c1f452cb41d47ad08560c653607.
> 2012-10-01 19:50:01,331 [IPC Server handler 59 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:01,331
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56402364,1349049689267.27b452f3bcce0815b7bf92370cbb51de.
> 2012-10-01 19:50:01,331 [IPC Server handler 59 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020:
> exiting
> 2012-10-01 19:50:01,332
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96426544,1348942597601.addf704f99dd1b2e07b3eff505e2c811.
> 2012-10-01 19:50:01,333
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,60414161,1348962852909.c6b1b21f00bbeef8648c4b9b3d28b49a.
> 2012-10-01 19:50:01,333
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56552794,1348940893836.5314886f88f6576e127757faa25cef7c.
> 2012-10-01 19:50:01,335
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56910924,1348962040261.fdedae86206fc091a72dde52a3d0d0b4.
> 2012-10-01 19:50:01,335
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56720084,1349029064698.ee5cb00ab358be0d2d36c59189da32f8.
> 2012-10-01 19:50:01,336
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56624533,1348941287729.6121fce2c31d4754b4ad4e855d85b501.
> 2012-10-01 19:50:01,336
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56899934,1348970959541.f34f01dd65e293cb6ab13de17ac91eec.
> 2012-10-01 19:50:01,337
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
> 2012-10-01 19:50:01,337
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56405923,1349049689267.bb4be5396608abeff803400cdd2408f4.
> 2012-10-01 19:50:01,338
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56364924,1349029457686.1e1c09b6eb734d8ad48ea0b4fa103381.
> 2012-10-01 19:50:01,339
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56784073,1348961864297.f01eaf712e59a0bca989ced951caf4f1.
> 2012-10-01 19:50:01,340
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56594534,1349027142822.8e67bb85f4906d579d4d278d55efce0b.
> 2012-10-01 19:50:01,340
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
> 2012-10-01 19:50:01,340
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56491525,1349027928183.7bbfb4d39ef4332e17845001191a6ad4.
> 2012-10-01 19:50:01,341
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,07123624,1348959804638.c114ec80c6693a284741e220da028736.
> 2012-10-01 19:50:01,342
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
> 2012-10-01 19:50:01,342
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56546534,1348941049708.bde2614732f938db04fdd81ed6dbfcf2.
> 2012-10-01 19:50:01,343
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,569054,1348962040261.a7942d7837cd57b68d156d2ce7e3bd5f.
> 2012-10-01 19:50:01,343
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56576714,1348982931576.3dd5bf244fb116cf2b6f812fcc39ad2d.
> 2012-10-01 19:50:01,344
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,5689007,1348963034009.c4b16ea4d8dbc66c301e67d8e58a7e48.
> 2012-10-01 19:50:01,344
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56410784,1349027912141.6de7be1745c329cf9680ad15e9bde594.
> 2012-10-01 19:50:01,345
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
> 2012-10-01 19:50:01,345
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96457954,1348964300132.674a03f0c9866968aabd70ab38a482c0.
> 2012-10-01 19:50:01,346
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56483084,1349027988535.de732d7e63ea53331b80255f51fc1a86.
> 2012-10-01 19:50:01,347
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56790484,1348941269761.5bcc58c48351de449cc17307ab4bf777.
> 2012-10-01 19:50:01,348
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56458293,1348982903997.4f67e6f4949a2ef7f4903f78f54c474e.
> 2012-10-01 19:50:01,348
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,95123235,1349026585563.a359eb4cb88d34a529804e50a5affa24.
> 2012-10-01 19:50:01,349
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
> 2012-10-01 19:50:01,350
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56368484,1348941099873.cef2729093a0d7d72b71fac1b25c0a40.
> 2012-10-01 19:50:01,350
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,17499894,1349026916228.630196a553f73069b9e568e6912ef0c5.
> 2012-10-01 19:50:01,351
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56375315,1348940876960.40cf6dfa370ce7f1fc6c1a59ba2f2191.
> 2012-10-01 19:50:01,351
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,95512574,1349009451986.e4d292eb66d16c21ef8ae32254334850.
> 2012-10-01 19:50:01,352
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
> 2012-10-01 19:50:01,352
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
> 2012-10-01 19:50:01,353
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56432705,1348941150045.07aa626f3703c7b4deaba1263c71894d.
> 2012-10-01 19:50:01,353
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,13118725,1349026772953.c0be859d4a4dc2246d764a8aad58fe88.
> 2012-10-01 19:50:01,354
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56520814,1348981132325.c2f16fd16f83aa51769abedfe8968bb6.
> 2012-10-01 19:50:01,354
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
> 2012-10-01 19:50:01,355
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56884434,1348963034009.616835869c81659a27eab896f48ae4e1.
> 2012-10-01 19:50:01,355
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56476541,1349028080664.341392a325646f24a3d8b8cd27ebda19.
> 2012-10-01 19:50:01,357
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56803462,1348941834500.6313b36f1949381d01df977a182e6140.
> 2012-10-01 19:50:01,357
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96464524,1348964300132.7a15f1e8e28f713212c516777267c2bf.
> 2012-10-01 19:50:01,358
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56875074,1348969971950.3e408e7cb32c9213d184e10bf42837ad.
> 2012-10-01 19:50:01,359
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,42862354,1348981565262.7ad46818060be413140cdcc11312119d.
> 2012-10-01 19:50:01,359
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56582264,1349028973106.b481b61be387a041a3f259069d5013a6.
> 2012-10-01 19:50:01,360
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56579105,1348982931576.1561a22c16263dccb8be07c654b43f2f.
> 2012-10-01 19:50:01,360
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56723415,1348946404223.38d992d687ad8925810be4220a732b13.
> 2012-10-01 19:50:01,361
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,4285921,1348981565262.7a2cbd8452b9e406eaf1a5ebff64855a.
> 2012-10-01 19:50:01,362
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56336394,1348941231573.ca52393a2eabae00a64f65c0b657b95a.
> 2012-10-01 19:50:01,363
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96452715,1348942353563.876edfc6e978879aac42bfc905a09c26.
> 2012-10-01 19:50:01,363
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
> 2012-10-01 19:50:01,364
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56525625,1348941298909.ccf16ed8e761765d2989343c7670e94f.
> 2012-10-01 19:50:01,365
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,97578484,1348938848996.98ecacc61ae4c5b3f7a3de64bec0e026.
> 2012-10-01 19:50:01,365
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56779025,1348961864297.cc13f0a6f5e632508f2e28a174ef1488.
> 2012-10-01 19:50:01,366
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
> 2012-10-01 19:50:01,366
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_user_events,43323443,1348591057882.8b0ab02c33f275114d89088345f58885.
> 2012-10-01 19:50:01,367
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
> 2012-10-01 19:50:01,367
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56686234,1348942631318.69270cd5013f8ca984424e508878e428.
> 2012-10-01 19:50:01,368
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,98642625,1348980566403.2277d2ef1d53d40d41cd23846619a3f8.
> 2012-10-01 19:50:01,524 [IPC Server handler 57 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_3201413024070455305_51616611 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:02,462 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 2
> regions to close
> 2012-10-01 19:50:02,462 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
> 2012-10-01 19:50:02,462 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
> 10.100.101.156:50010
> 2012-10-01 19:50:02,495 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
> primary datanode 10.100.102.122:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:02,496 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.122:50010 failed 3 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
> retry...
> 2012-10-01 19:50:02,686 [IPC Server handler 46 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@504b62c6,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320404"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.172:53925: output error
> 2012-10-01 19:50:02,688 [IPC Server handler 46 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:02,688 [IPC Server handler 46 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020:
> exiting
> 2012-10-01 19:50:02,809 [IPC Server handler 55 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@45f1c31e,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00322424"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.178:35016: output error
> 2012-10-01 19:50:02,810 [IPC Server handler 55 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:02,810 [IPC Server handler 55 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020:
> exiting
> 2012-10-01 19:50:03,496 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
> 2012-10-01 19:50:03,496 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
> 10.100.101.156:50010
> 2012-10-01 19:50:03,510 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
> primary datanode 10.100.102.122:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:03,510 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.122:50010 failed 4 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
> retry...
> 2012-10-01 19:50:05,299 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
> 2012-10-01 19:50:05,299 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
> 10.100.101.156:50010
> 2012-10-01 19:50:05,314 [IPC Server handler 3 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@472aa9fe,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321694"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.176:42371: output error
> 2012-10-01 19:50:05,315 [IPC Server handler 3 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:05,315 [IPC Server handler 3 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020:
> exiting
> 2012-10-01 19:50:05,329 [IPC Server handler 16 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@42987a12,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320293"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.135:35132: output error
> 2012-10-01 19:50:05,331 [IPC Server handler 16 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:05,331 [IPC Server handler 16 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020:
> exiting
> 2012-10-01 19:50:05,638 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
> primary datanode 10.100.102.122:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:05,638 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.122:50010 failed 5 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
> retry...
> 2012-10-01 19:50:05,641 [IPC Server handler 26 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@a9c09e8,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319505"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.183:60078: output error
> 2012-10-01 19:50:05,643 [IPC Server handler 26 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:05,643 [IPC Server handler 26 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020:
> exiting
> 2012-10-01 19:50:05,664 [IPC Server handler 57 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@349d7b4,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319915"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.141:58290: output error
> 2012-10-01 19:50:05,666 [IPC Server handler 57 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:05,666 [IPC Server handler 57 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020:
> exiting
> 2012-10-01 19:50:07,063 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
> 2012-10-01 19:50:07,063 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
> 10.100.101.156:50010
> 2012-10-01 19:50:07,076 [IPC Server handler 23 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@5ba03734,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319654"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.161:43227: output error
> 2012-10-01 19:50:07,077 [IPC Server handler 23 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:07,077 [IPC Server handler 23 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020:
> exiting
> 2012-10-01 19:50:07,089 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
> primary datanode 10.100.102.122:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:07,090 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.122:50010 failed 6 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010.
> Marking primary datanode as bad.
> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@3d19e607,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319564"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.82:42779: output error
> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020:
> exiting
> 2012-10-01 19:50:07,181
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.
> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@5920511b,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00322014"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.88:49489: output error
> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020:
> exiting
> 2012-10-01 19:50:08,064 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 1
> regions to close
> 2012-10-01 19:50:08,159 [regionserver60020.leaseChecker] INFO
> org.apache.hadoop.hbase.regionserver.Leases:
> regionserver60020.leaseChecker closing leases
> 2012-10-01 19:50:08,159 [regionserver60020.leaseChecker] INFO
> org.apache.hadoop.hbase.regionserver.Leases:
> regionserver60020.leaseChecker closed leases
> 2012-10-01 19:50:08,508 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
> primary datanode 10.100.101.156:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:08,508 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.101.156:50010 failed 1 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:09,652 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
> primary datanode 10.100.101.156:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:09,653 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.101.156:50010 failed 2 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:10,697 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
> primary datanode 10.100.101.156:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:10,697 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.101.156:50010 failed 3 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:12,278 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
> primary datanode 10.100.101.156:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:12,279 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.101.156:50010 failed 4 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:13,294 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
> primary datanode 10.100.101.156:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:13,294 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.101.156:50010 failed 5 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:14,306 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
> primary datanode 10.100.101.156:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:14,306 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.101.156:50010 failed 6 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010. Marking primary datanode as
> bad.
> 2012-10-01 19:50:15,317 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
> primary datanode 10.100.102.88:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:15,318 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 1 times.  Pipeline was
> 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:16,375 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
> primary datanode 10.100.102.88:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:16,376 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 2 times.  Pipeline was
> 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:17,385 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
> primary datanode 10.100.102.88:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:17,385 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 3 times.  Pipeline was
> 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:18,395 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
> primary datanode 10.100.102.88:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:18,395 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 4 times.  Pipeline was
> 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:19,404 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
> primary datanode 10.100.102.88:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:19,405 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 5 times.  Pipeline was
> 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:20,414 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
> primary datanode 10.100.102.88:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>      at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>      at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>      at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy4.nextGenerationStamp(Unknown Source)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>      at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>      at java.security.AccessController.doPrivileged(Native Method)
>      at javax.security.auth.Subject.doAs(Subject.java:396)
>      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>      at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy14.recoverBlock(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,415 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
> 2012-10-01 19:50:20,415 [IPC Server handler 58 on 60020] ERROR
> org.apache.hadoop.hdfs.DFSClient: Exception closing file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> : java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,415 [IPC Server handler 69 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Error while syncing
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,415 [regionserver60020.logSyncer] WARN
> org.apache.hadoop.hdfs.DFSClient: Error while syncing
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,417 [IPC Server handler 69 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Error while syncing
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,417 [IPC Server handler 58 on 60020] INFO
> org.apache.hadoop.fs.FileSystem: Could not cancel cleanup thread,
> though no FileSystems are open
> 2012-10-01 19:50:20,417 [regionserver60020.logSyncer] WARN
> org.apache.hadoop.hdfs.DFSClient: Error while syncing
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,417 [regionserver60020.logSyncer] FATAL
> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
> Requesting close of hlog
> java.io.IOException: Reflection
>      at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>      at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.reflect.InvocationTargetException
>      at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>      ... 4 more
> Caused by: java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,418 [regionserver60020.logSyncer] ERROR
> org.apache.hadoop.hbase.regionserver.wal.HLog: Error while syncing,
> requesting close of hlog
> java.io.IOException: Reflection
>      at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>      at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.reflect.InvocationTargetException
>      at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>      ... 4 more
> Caused by: java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,417 [IPC Server handler 69 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
> Requesting close of hlog
> java.io.IOException: Reflection
>      at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog.append(HLog.java:1033)
>      at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:1852)
>      at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1723)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3076)
>      at sun.reflect.GeneratedMethodAccessor96.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.lang.reflect.InvocationTargetException
>      at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>      ... 11 more
> Caused by: java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,417 [IPC Server handler 29 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>      at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      ... 9 more
> Caused by: java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      ... 21 more
> 2012-10-01 19:50:20,417 [IPC Server handler 24 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>      at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      ... 9 more
> Caused by: java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      ... 21 more
> 2012-10-01 19:50:20,420 [IPC Server handler 24 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,417 [IPC Server handler 1 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>      at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      ... 9 more
> Caused by: java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      ... 21 more
> 2012-10-01 19:50:20,421 [IPC Server handler 1 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,417 [IPC Server handler 25 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>      at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      ... 9 more
> Caused by: java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      ... 21 more
> 2012-10-01 19:50:20,421 [IPC Server handler 25 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,417 [IPC Server handler 90 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>      at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      ... 9 more
> Caused by: java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      ... 21 more
> 2012-10-01 19:50:20,422 [IPC Server handler 90 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,417 [IPC Server handler 58 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>      at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      ... 9 more
> Caused by: java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      ... 21 more
> 2012-10-01 19:50:20,423 [IPC Server handler 58 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,417 [IPC Server handler 17 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>      at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>      at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>      at $Proxy7.getFileInfo(Unknown Source)
>      at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>      at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>      at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>      at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>      ... 9 more
> Caused by: java.lang.InterruptedException
>      at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>      at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>      at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>      at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>      at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>      ... 21 more
> 2012-10-01 19:50:20,423 [IPC Server handler 17 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,423 [IPC Server handler 58 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
> numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> 2012-10-01 19:50:20,423 [IPC Server handler 17 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
> numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> 2012-10-01 19:50:20,422 [IPC Server handler 90 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
> numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> 2012-10-01 19:50:20,421 [IPC Server handler 25 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
> numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> 2012-10-01 19:50:20,421 [IPC Server handler 1 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
> numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> 2012-10-01 19:50:20,420 [IPC Server handler 69 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: (responseTooSlow):
> {"processingtimems":22039,"call":"multi(org.apache.hadoop.hbase.client.MultiAction@207c46fb),
> rpc version=1, client version=29,
> methodsFingerPrint=54742778","client":"10.100.102.155:39852","starttimems":1349120998380,"queuetimems":0,"class":"HRegionServer","responsesize":0,"method":"multi"}
> 2012-10-01 19:50:20,420 [IPC Server handler 24 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
> numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1575,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> 2012-10-01 19:50:20,420
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING
> region server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272:
> Unrecoverable exception while closing region
> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
> still finishing close
> java.io.IOException: Filesystem closed
>      at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
>      at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1859)
>      at java.io.FilterInputStream.close(FilterInputStream.java:155)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:320)
>      at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1081)
>      at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:568)
>      at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:473)
>      at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:789)
>      at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:724)
>      at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:119)
>      at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>      at java.lang.Thread.run(Thread.java:662)
> 2012-10-01 19:50:20,426
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,419 [IPC Server handler 29 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,426
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of
> metrics: requestsPerSecond=0, numberOfOnlineRegions=136,
> numberOfStores=136, numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1577,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> 2012-10-01 19:50:20,426 [IPC Server handler 29 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
> numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1577,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> 2012-10-01 19:50:20,445 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
> fatal error to master
> java.lang.reflect.UndeclaredThrowableException
>      at $Proxy8.reportRSFatalError(Unknown Source)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Call to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
> local exception: java.nio.channels.ClosedChannelException
>      at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>      at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      ... 11 more
> Caused by: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.FilterInputStream.read(FilterInputStream.java:116)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>      at java.io.DataInputStream.readInt(DataInputStream.java:370)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
> 2012-10-01 19:50:20,446 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
> fatal error to master
> java.lang.reflect.UndeclaredThrowableException
>      at $Proxy8.reportRSFatalError(Unknown Source)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Call to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
> local exception: java.nio.channels.ClosedByInterruptException
>      at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>      at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      ... 11 more
> Caused by: java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>      at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
>      at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
>      at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>      at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>      at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:545)
>      at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)
>      ... 12 more
> 2012-10-01 19:50:20,447 [IPC Server handler 29 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
> System not available
> 2012-10-01 19:50:20,446 [IPC Server handler 58 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
> System not available
> 2012-10-01 19:50:20,446 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
> fatal error to master
> java.lang.reflect.UndeclaredThrowableException
>      at $Proxy8.reportRSFatalError(Unknown Source)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>      at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>      at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:328)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:362)
>      at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1045)
>      at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:897)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      ... 11 more
> 2012-10-01 19:50:20,448 [IPC Server handler 17 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
> System not available
> 2012-10-01 19:50:20,445 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
> fatal error to master
> java.lang.reflect.UndeclaredThrowableException
>      at $Proxy8.reportRSFatalError(Unknown Source)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Call to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
> local exception: java.nio.channels.ClosedChannelException
>      at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>      at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      ... 11 more
> Caused by: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.FilterInputStream.read(FilterInputStream.java:116)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>      at java.io.DataInputStream.readInt(DataInputStream.java:370)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
> 2012-10-01 19:50:20,448 [IPC Server handler 1 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
> System not available
> 2012-10-01 19:50:20,445
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to
> report fatal error to master
> java.lang.reflect.UndeclaredThrowableException
>      at $Proxy8.reportRSFatalError(Unknown Source)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>      at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:131)
>      at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>      at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Call to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
> local exception: java.nio.channels.ClosedByInterruptException
>      at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>      at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      ... 7 more
> Caused by: java.nio.channels.ClosedByInterruptException
>      at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>      at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
>      at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
>      at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>      at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>      at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:545)
>      at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      at $Proxy8.reportRSFatalError(Unknown Source)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 2012-10-01 19:50:20,450
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED:
> Unrecoverable exception while closing region
> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
> still finishing close
> 2012-10-01 19:50:20,445 [IPC Server handler 69 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> multi(org.apache.hadoop.hbase.client.MultiAction@207c46fb), rpc
> version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.155:39852: output error
> 2012-10-01 19:50:20,445 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
> fatal error to master
> java.lang.reflect.UndeclaredThrowableException
>      at $Proxy8.reportRSFatalError(Unknown Source)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Call to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
> local exception: java.nio.channels.ClosedChannelException
>      at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>      at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      ... 11 more
> Caused by: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.FilterInputStream.read(FilterInputStream.java:116)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>      at java.io.DataInputStream.readInt(DataInputStream.java:370)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
> 2012-10-01 19:50:20,451 [IPC Server handler 24 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
> System not available
> 2012-10-01 19:50:20,445 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
> fatal error to master
> java.lang.reflect.UndeclaredThrowableException
>      at $Proxy8.reportRSFatalError(Unknown Source)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Call to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
> local exception: java.nio.channels.ClosedChannelException
>      at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>      at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      ... 11 more
> Caused by: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.FilterInputStream.read(FilterInputStream.java:116)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>      at java.io.DataInputStream.readInt(DataInputStream.java:370)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
> 2012-10-01 19:50:20,451 [IPC Server handler 90 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
> System not available
> 2012-10-01 19:50:20,445 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
> fatal error to master
> java.lang.reflect.UndeclaredThrowableException
>      at $Proxy8.reportRSFatalError(Unknown Source)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>      at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Call to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
> local exception: java.nio.channels.ClosedChannelException
>      at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>      at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>      at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      ... 11 more
> Caused by: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>      at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>      at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>      at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>      at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>      at java.io.FilterInputStream.read(FilterInputStream.java:116)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>      at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>      at java.io.DataInputStream.readInt(DataInputStream.java:370)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>      at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
> 2012-10-01 19:50:20,452 [IPC Server handler 25 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
> System not available
> 2012-10-01 19:50:20,452 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@5d72e577,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321312"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.184:34111: output error
> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@2237178f,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00316983"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.188:59581: output error
> 2012-10-01 19:50:20,450 [IPC Server handler 69 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020:
> exiting
> 2012-10-01 19:50:20,450
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable
> while processing event M_RS_CLOSE_REGION
> java.lang.RuntimeException: java.io.IOException: Filesystem closed
>      at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:133)
>      at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>      at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Filesystem closed
>      at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
>      at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
>      at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1859)
>      at java.io.FilterInputStream.close(FilterInputStream.java:155)
>      at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:320)
>      at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1081)
>      at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:568)
>      at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:473)
>      at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:789)
>      at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:724)
>      at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:119)
>      ... 4 more
> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@573dba6d,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"0032027"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.183:60076: output error
> 2012-10-01 19:50:20,452 [IPC Server handler 69 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020:
> exiting
> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@4eebbed5,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00317054"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.146:40240: output error
> 2012-10-01 19:50:20,452 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:20,453 [IPC Server handler 29 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020:
> exiting
> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020:
> exiting
> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020:
> exiting
> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@4ff0ed4a,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00318964"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.172:53924: output error
> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020:
> exiting
> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@526abe46,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00316914"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.184:34110: output error
> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020:
> exiting
> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@5df20fef,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319173"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.146:40243: output error
> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020
> caught: java.nio.channels.ClosedChannelException
>      at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>      at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>      at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>      at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>
> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020:
> exiting
> 2012-10-01 19:50:21,066
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] WARN
> org.apache.hadoop.hdfs.DFSClient: Error while syncing
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] WARN
> org.apache.hadoop.hdfs.DFSClient: Error while syncing
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] FATAL
> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
> Requesting close of hlog
> java.io.IOException: Reflection
>      at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>      at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.reflect.InvocationTargetException
>      at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>      ... 4 more
> Caused by: java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:21,419 [regionserver60020.logSyncer] ERROR
> org.apache.hadoop.hbase.regionserver.wal.HLog: Error while syncing,
> requesting close of hlog
> java.io.IOException: Reflection
>      at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>      at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>      at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.reflect.InvocationTargetException
>      at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>      at java.lang.reflect.Method.invoke(Method.java:597)
>      at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>      ... 4 more
> Caused by: java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>      at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:22,066 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272; all regions
> closed.
> 2012-10-01 19:50:22,066 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closing
> leases
> 2012-10-01 19:50:22,066 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closed
> leases
> 2012-10-01 19:50:22,082 [regionserver60020] WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed deleting my
> ephemeral node
> org.apache.zookeeper.KeeperException$SessionExpiredException:
> KeeperErrorCode = Session expired for
> /hbase/rs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
>      at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>      at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>      at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:868)
>      at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:107)
>      at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:962)
>      at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:951)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:964)
>      at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:762)
>      at java.lang.Thread.run(Thread.java:662)
> 2012-10-01 19:50:22,082 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272; zookeeper
> connection closed.
> 2012-10-01 19:50:22,082 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: regionserver60020
> exiting
> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
> starting; hbase.shutdown.hook=true;
> fsShutdownHook=Thread[Thread-5,5,main]
> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown
> hook
> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs
> shutdown hook thread.
> 2012-10-01 19:50:22,124 [Shutdownhook:regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
> finished.
> Mon Oct  1 19:54:10 UTC 2012 Starting regionserver on
> data3024.ngpipes.milp.ngmoco.com
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 20
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 16382
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 32768
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) unlimited
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> 2012-10-01 19:54:11,355 [main] INFO
> org.apache.hadoop.hbase.util.VersionInfo: HBase 0.92.1
> 2012-10-01 19:54:11,356 [main] INFO
> org.apache.hadoop.hbase.util.VersionInfo: Subversion
> https://svn.apache.org/repos/asf/hbase/branches/0.92 -r 1298924
> 2012-10-01 19:54:11,356 [main] INFO
> org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Fri
> Mar  9 16:58:34 UTC 2012
> 2012-10-01 19:54:11,513 [main] INFO
> org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java
> HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc.,
> vmVersion=20.1-b02
> 2012-10-01 19:54:11,513 [main] INFO
> org.apache.hadoop.hbase.util.ServerCommandLine:
> vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx4000m,
> -XX:NewSize=128m, -XX:MaxNewSize=128m,
> -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC,
> -XX:CMSInitiatingOccupancyFraction=75, -verbose:gc,
> -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps,
> -Xloggc:/data2/hbase_log/gc-hbase.log,
> -Dcom.sun.management.jmxremote.authenticate=true,
> -Dcom.sun.management.jmxremote.ssl=false,
> -Dcom.sun.management.jmxremote.password.file=/home/hadoop/hadoop/conf/jmxremote.password,
> -Dcom.sun.management.jmxremote.port=8010,
> -Dhbase.log.dir=/data2/hbase_log,
> -Dhbase.log.file=hbase-hadoop-regionserver-data3024.ngpipes.milp.ngmoco.com.log,
> -Dhbase.home.dir=/home/hadoop/hbase, -Dhbase.id.str=hadoop,
> -Dhbase.root.logger=INFO,DRFA,
> -Djava.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64]
> 2012-10-01 19:54:11,964 [IPC Reader 0 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,967 [IPC Reader 1 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,970 [IPC Reader 2 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,973 [IPC Reader 3 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,976 [IPC Reader 4 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,979 [IPC Reader 5 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,982 [IPC Reader 6 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,985 [IPC Reader 7 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,988 [IPC Reader 8 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,991 [IPC Reader 9 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:12,002 [main] INFO
> org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics
> with hostName=HRegionServer, port=60020
> 2012-10-01 19:54:12,081 [main] INFO
> org.apache.hadoop.hbase.io.hfile.CacheConfig: Allocating LruBlockCache
> with maximum size 996.8m
> 2012-10-01 19:54:12,221 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48
> GMT
> 2012-10-01 19:54:12,221 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:host.name=data3024.ngpipes.milp.ngmoco.com
> 2012-10-01 19:54:12,221 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0_26
> 2012-10-01 19:54:12,221 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun
> Microsystems Inc.
> 2012-10-01 19:54:12,221 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
> 2012-10-01 19:54:12,221 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:java.class.path=/home/hadoop/hbase/conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase:/home/hadoop/hbase/hbase-0.92.1.jar:/home/hadoop/hbase/hbase-0.92.1-tests.jar:/home/hadoop/hbase/lib/activation-1.1.jar:/home/hadoop/hbase/lib/asm-3.1.jar:/home/hadoop/hbase/lib/avro-1.5.3.jar:/home/hadoop/hbase/lib/avro-ipc-1.5.3.jar:/home/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hbase/lib/commons-cli-1.2.jar:/home/hadoop/hbase/lib/commons-codec-1.4.jar:/home/hadoop/hbase/lib/commons-collections-3.2.1.jar:/home/hadoop/hbase/lib/commons-configuration-1.6.jar:/home/hadoop/hbase/lib/commons-digester-1.8.jar:/home/hadoop/hbase/lib/commons-el-1.0.jar:/home/hadoop/hbase/lib/commons-httpclient-3.1.jar:/home/hadoop/hbase/lib/commons-lang-2.5.jar:/home/hadoop/hbase/lib/commons-logging-1.1.1.jar:/home/hadoop/hbase/lib/commons-math-2.1.jar:/home/hadoop/hbase/lib/commons-net-1.4.1.jar:/home/hadoop/hbase/lib/core-3.1.1.jar:/home/hadoop/hbase/lib/guava-r09.jar:/home/hadoop/hbase/lib/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hbase/lib/hadoop-lzo-0.4.9.jar:/home/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/home/hadoop/hbase/lib/httpclient-4.0.1.jar:/home/hadoop/hbase/lib/httpcore-4.0.1.jar:/home/hadoop/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/home/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase/lib/jaxb-api-2.1.jar:/home/hadoop/hbase/lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase/lib/jersey-core-1.4.jar:/home/hadoop/hbase/lib/jersey-json-1.4.jar:/home/hadoop/hbase/lib/jersey-server-1.4.jar:/home/hadoop/hbase/lib/jettison-1.1.jar:/home/hadoop/hbase/lib/jetty-6.1.26.jar:/home/hadoop/hbase/lib/jetty-util-6.1.26.jar:/home/hadoop/hbase/lib/jruby-complete-1.6.5.jar:/home/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase/lib/libthrift-0.7.0.jar:/home/hadoop/hbase/lib/log4j-1.2.16.jar:/home/hadoop/hbase/lib/netty-3.2.4.Final.jar:/home/hadoop/hbase/lib/protobuf-java-2.4.0a.jar:/home/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase/lib/servlet-api-2.5.jar:/home/hadoop/hbase/lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/home/hadoop/hbase/lib/stax-api-1.0.1.jar:/home/hadoop/hbase/lib/velocity-1.7.jar:/home/hadoop/hbase/lib/xmlenc-0.52.jar:/home/hadoop/hbase/lib/zookeeper-3.4.3.jar:
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:java.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:os.version=2.6.35-30-generic
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client environment:user.name=hadoop
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:user.home=/home/hadoop/
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:user.dir=/home/gregross
> 2012-10-01 19:54:12,225 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Initiating client connection,
> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
> sessionTimeout=180000 watcher=regionserver:60020
> 2012-10-01 19:54:12,251 [regionserver60020-SendThread()] INFO
> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
> /10.100.102.197:2181
> 2012-10-01 19:54:12,252 [regionserver60020] INFO
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
> of this process is 15403@data3024.ngpipes.milp.ngmoco.com
> 2012-10-01 19:54:12,259
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section
> 'Client' could not be found. If you are not using SASL, you may ignore
> this. On the other hand, if you expected SASL to work, please fix your
> JAAS configuration.
> 2012-10-01 19:54:12,260
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
> session
> 2012-10-01 19:54:12,272
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
> server; r-o mode will be unavailable
> 2012-10-01 19:54:12,273
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
> sessionid = 0x137ec64373dd4b5, negotiated timeout = 40000
> 2012-10-01 19:54:12,289 [main] INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown
> hook thread: Shutdownhook:regionserver60020
> 2012-10-01 19:54:12,352 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Initiating client connection,
> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
> sessionTimeout=180000 watcher=hconnection
> 2012-10-01 19:54:12,353 [regionserver60020-SendThread()] INFO
> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
> /10.100.102.197:2181
> 2012-10-01 19:54:12,353 [regionserver60020] INFO
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
> of this process is 15403@data3024.ngpipes.milp.ngmoco.com
> 2012-10-01 19:54:12,354
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section
> 'Client' could not be found. If you are not using SASL, you may ignore
> this. On the other hand, if you expected SASL to work, please fix your
> JAAS configuration.
> 2012-10-01 19:54:12,354
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
> session
> 2012-10-01 19:54:12,361
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
> server; r-o mode will be unavailable
> 2012-10-01 19:54:12,361
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
> sessionid = 0x137ec64373dd4b6, negotiated timeout = 40000
> 2012-10-01 19:54:12,384 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
> globalMemStoreLimit=1.6g, globalMemStoreLimitLowMark=1.4g,
> maxHeap=3.9g
> 2012-10-01 19:54:12,400 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Runs every 2hrs,
> 46mins, 40sec
> 2012-10-01 19:54:12,420 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect
> to Master server at
> namenode-sn301.ngpipes.milp.ngmoco.com,60000,1348698078915
> 2012-10-01 19:54:12,453 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to
> master at data3024.ngpipes.milp.ngmoco.com/10.100.101.156:60020
> 2012-10-01 19:54:12,453 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at
> namenode-sn301.ngpipes.milp.ngmoco.com,60000,1348698078915 that we are
> up with port=60020, startcode=1349121252040
> 2012-10-01 19:54:12,476 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us
> hostname to use. Was=data3024.ngpipes.milp.ngmoco.com,
> Now=data3024.ngpipes.milp.ngmoco.com
> 2012-10-01 19:54:12,568 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.wal.HLog: HLog configuration:
> blocksize=64 MB, rollsize=60.8 MB, enabled=true,
> optionallogflushinternal=1000ms
> 2012-10-01 19:54:12,642 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.wal.HLog:  for
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1349121252040/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1349121252040.1349121252569
> 2012-10-01 19:54:12,643 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.wal.HLog: Using
> getNumCurrentReplicas--HDFS-826
> 2012-10-01 19:54:12,651 [regionserver60020] INFO
> org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics
> with processName=RegionServer, sessionId=regionserver60020
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: revision
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: date
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: user
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: url
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: version
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: new MBeanInfo
> 2012-10-01 19:54:12,657 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: new MBeanInfo
> 2012-10-01 19:54:12,657 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics:
> Initialized
> 2012-10-01 19:54:12,722 [regionserver60020] INFO org.mortbay.log:
> Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2012-10-01 19:54:12,774 [regionserver60020] INFO
> org.apache.hadoop.http.HttpServer: Added global filtersafety
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 2012-10-01 19:54:12,787 [regionserver60020] INFO
> org.apache.hadoop.http.HttpServer: Port returned by
> webServer.getConnectors()[0].getLocalPort() before open() is -1.
> Opening the listener on 60030
> 2012-10-01 19:54:12,787 [regionserver60020] INFO
> org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned
> 60030 webServer.getConnectors()[0].getLocalPort() returned 60030
> 2012-10-01 19:54:12,787 [regionserver60020] INFO
> org.apache.hadoop.http.HttpServer: Jetty bound to port 60030
> 2012-10-01 19:54:12,787 [regionserver60020] INFO org.mortbay.log: jetty-6.1.26
> 2012-10-01 19:54:13,079 [regionserver60020] INFO org.mortbay.log:
> Started SelectChannelConnector@0.0.0.0:60030
> 2012-10-01 19:54:13,079 [IPC Server Responder] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
> 2012-10-01 19:54:13,079 [IPC Server listener on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60020:
> starting
> 2012-10-01 19:54:13,094 [IPC Server handler 0 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020:
> starting
> 2012-10-01 19:54:13,094 [IPC Server handler 1 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020:
> starting
> 2012-10-01 19:54:13,095 [IPC Server handler 2 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020:
> starting
> 2012-10-01 19:54:13,095 [IPC Server handler 3 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020:
> starting
> 2012-10-01 19:54:13,095 [IPC Server handler 4 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020:
> starting
> 2012-10-01 19:54:13,095 [IPC Server handler 5 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020:
> starting
> 2012-10-01 19:54:13,095 [IPC Server handler 6 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020:
> starting
> 2012-10-01 19:54:13,095 [IPC Server handler 7 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020:
> starting
> 2012-10-01 19:54:13,095 [IPC Server handler 8 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60020:
> starting
> 2012-10-01 19:54:13,096 [IPC Server handler 9 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020:
> starting
> 2012-10-01 19:54:13,096 [IPC Server handler 10 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 10 on 60020:
> starting
> 2012-10-01 19:54:13,096 [IPC Server handler 11 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 11 on 60020:
> starting
> 2012-10-01 19:54:13,096 [IPC Server handler 12 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 12 on 60020:
> starting
> 2012-10-01 19:54:13,096 [IPC Server handler 13 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 13 on 60020:
> starting
> 2012-10-01 19:54:13,096 [IPC Server handler 14 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 14 on 60020:
> starting
> 2012-10-01 19:54:13,097 [IPC Server handler 15 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 15 on 60020:
> starting
> 2012-10-01 19:54:13,097 [IPC Server handler 16 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020:
> starting
> 2012-10-01 19:54:13,097 [IPC Server handler 17 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020:
> starting
> 2012-10-01 19:54:13,097 [IPC Server handler 18 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 18 on 60020:
> starting
> 2012-10-01 19:54:13,098 [IPC Server handler 19 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 19 on 60020:
> starting
> 2012-10-01 19:54:13,098 [IPC Server handler 20 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 20 on 60020:
> starting
> 2012-10-01 19:54:13,098 [IPC Server handler 21 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 21 on 60020:
> starting
> 2012-10-01 19:54:13,098 [IPC Server handler 22 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 22 on 60020:
> starting
> 2012-10-01 19:54:13,098 [IPC Server handler 23 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020:
> starting
> 2012-10-01 19:54:13,098 [IPC Server handler 24 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020:
> starting
> 2012-10-01 19:54:13,098 [IPC Server handler 25 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020:
> starting
> 2012-10-01 19:54:13,099 [IPC Server handler 26 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020:
> starting
> 2012-10-01 19:54:13,099 [IPC Server handler 27 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 27 on 60020:
> starting
> 2012-10-01 19:54:13,099 [IPC Server handler 28 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 28 on 60020:
> starting
> 2012-10-01 19:54:13,100 [IPC Server handler 29 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020:
> starting
> 2012-10-01 19:54:13,101 [IPC Server handler 30 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 30 on 60020:
> starting
> 2012-10-01 19:54:13,101 [IPC Server handler 31 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 31 on 60020:
> starting
> 2012-10-01 19:54:13,101 [IPC Server handler 32 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 32 on 60020:
> starting
> 2012-10-01 19:54:13,101 [IPC Server handler 33 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 33 on 60020:
> starting
> 2012-10-01 19:54:13,101 [IPC Server handler 34 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 34 on 60020:
> starting
> 2012-10-01 19:54:13,102 [IPC Server handler 35 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 35 on 60020:
> starting
> 2012-10-01 19:54:13,102 [IPC Server handler 36 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 36 on 60020:
> starting
> 2012-10-01 19:54:13,102 [IPC Server handler 37 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 37 on 60020:
> starting
> 2012-10-01 19:54:13,102 [IPC Server handler 38 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 38 on 60020:
> starting
> 2012-10-01 19:54:13,102 [IPC Server handler 39 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020:
> starting
> 2012-10-01 19:54:13,102 [IPC Server handler 40 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 40 on 60020:
> starting
> 2012-10-01 19:54:13,103 [IPC Server handler 41 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020:
> starting
> 2012-10-01 19:54:13,103 [IPC Server handler 42 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 42 on 60020:
> starting
> 2012-10-01 19:54:13,103 [IPC Server handler 43 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 43 on 60020:
> starting
> 2012-10-01 19:54:13,103 [IPC Server handler 44 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020:
> starting
> 2012-10-01 19:54:13,103 [IPC Server handler 45 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 45 on 60020:
> starting
> 2012-10-01 19:54:13,103 [IPC Server handler 46 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020:
> starting
> 2012-10-01 19:54:13,103 [IPC Server handler 47 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 47 on 60020:
> starting
> 2012-10-01 19:54:13,103 [IPC Server handler 48 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 48 on 60020:
> starting
> 2012-10-01 19:54:13,104 [IPC Server handler 49 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 49 on 60020:
> starting
> 2012-10-01 19:54:13,104 [IPC Server handler 50 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 50 on 60020:
> starting
> 2012-10-01 19:54:13,104 [IPC Server handler 51 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 51 on 60020:
> starting
> 2012-10-01 19:54:13,104 [IPC Server handler 52 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 52 on 60020:
> starting
> 2012-10-01 19:54:13,104 [IPC Server handler 53 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 53 on 60020:
> starting
> 2012-10-01 19:54:13,104 [IPC Server handler 54 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 54 on 60020:
> starting
> 2012-10-01 19:54:13,104 [IPC Server handler 55 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020:
> starting
> 2012-10-01 19:54:13,104 [IPC Server handler 56 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 56 on 60020:
> starting
> 2012-10-01 19:54:13,104 [IPC Server handler 57 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020:
> starting
> 2012-10-01 19:54:13,105 [IPC Server handler 58 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020:
> starting
> 2012-10-01 19:54:13,105 [IPC Server handler 59 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020:
> starting
> 2012-10-01 19:54:13,105 [IPC Server handler 60 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020:
> starting
> 2012-10-01 19:54:13,105 [IPC Server handler 61 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020:
> starting
> 2012-10-01 19:54:13,105 [IPC Server handler 62 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 62 on 60020:
> starting
> 2012-10-01 19:54:13,105 [IPC Server handler 63 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 63 on 60020:
> starting
> 2012-10-01 19:54:13,105 [IPC Server handler 64 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 64 on 60020:
> starting
> 2012-10-01 19:54:13,105 [IPC Server handler 65 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 65 on 60020:
> starting
> 2012-10-01 19:54:13,105 [IPC Server handler 66 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020:
> starting
> 2012-10-01 19:54:13,106 [IPC Server handler 67 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 67 on 60020:
> starting
> 2012-10-01 19:54:13,106 [IPC Server handler 68 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 68 on 60020:
> starting
> 2012-10-01 19:54:13,106 [IPC Server handler 69 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020:
> starting
> 2012-10-01 19:54:13,106 [IPC Server handler 70 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 70 on 60020:
> starting
> 2012-10-01 19:54:13,106 [IPC Server handler 71 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 71 on 60020:
> starting
> 2012-10-01 19:54:13,106 [IPC Server handler 72 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 72 on 60020:
> starting
> 2012-10-01 19:54:13,106 [IPC Server handler 73 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 73 on 60020:
> starting
> 2012-10-01 19:54:13,106 [IPC Server handler 74 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 74 on 60020:
> starting
> 2012-10-01 19:54:13,107 [IPC Server handler 75 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 75 on 60020:
> starting
> 2012-10-01 19:54:13,107 [IPC Server handler 76 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 76 on 60020:
> starting
> 2012-10-01 19:54:13,107 [IPC Server handler 77 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 77 on 60020:
> starting
> 2012-10-01 19:54:13,107 [IPC Server handler 78 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 78 on 60020:
> starting
> 2012-10-01 19:54:13,107 [IPC Server handler 79 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020:
> starting
> 2012-10-01 19:54:13,107 [IPC Server handler 80 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 80 on 60020:
> starting
> 2012-10-01 19:54:13,107 [IPC Server handler 81 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 81 on 60020:
> starting
> 2012-10-01 19:54:13,107 [IPC Server handler 82 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 82 on 60020:
> starting
> 2012-10-01 19:54:13,107 [IPC Server handler 83 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 83 on 60020:
> starting
> 2012-10-01 19:54:13,108 [IPC Server handler 84 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 84 on 60020:
> starting
> 2012-10-01 19:54:13,108 [IPC Server handler 85 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020:
> starting
> 2012-10-01 19:54:13,108 [IPC Server handler 86 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 86 on 60020:
> starting
> 2012-10-01 19:54:13,108 [IPC Server handler 87 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 87 on 60020:
> starting
> 2012-10-01 19:54:13,108 [IPC Server handler 88 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 88 on 60020:
> starting
> 2012-10-01 19:54:13,108 [IPC Server handler 89 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020:
> starting
> 2012-10-01 19:54:13,109 [IPC Server handler 90 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020:
> starting
> 2012-10-01 19:54:13,109 [IPC Server handler 91 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 91 on 60020:
> starting
> 2012-10-01 19:54:13,109 [IPC Server handler 92 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 92 on 60020:
> starting
> 2012-10-01 19:54:13,109 [IPC Server handler 93 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 93 on 60020:
> starting
> 2012-10-01 19:54:13,109 [IPC Server handler 94 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 94 on 60020:
> starting
> 2012-10-01 19:54:13,109 [IPC Server handler 95 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 95 on 60020:
> starting
> 2012-10-01 19:54:13,110 [IPC Server handler 96 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 96 on 60020:
> starting
> 2012-10-01 19:54:13,110 [IPC Server handler 97 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 97 on 60020:
> starting
> 2012-10-01 19:54:13,110 [IPC Server handler 98 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 98 on 60020:
> starting
> 2012-10-01 19:54:13,110 [IPC Server handler 99 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 99 on 60020:
> starting
> 2012-10-01 19:54:13,110 [PRI IPC Server handler 0 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020:
> starting
> 2012-10-01 19:54:13,110 [PRI IPC Server handler 1 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 1 on 60020:
> starting
> 2012-10-01 19:54:13,110 [PRI IPC Server handler 2 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 2 on 60020:
> starting
> 2012-10-01 19:54:13,111 [PRI IPC Server handler 3 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 3 on 60020:
> starting
> 2012-10-01 19:54:13,111 [PRI IPC Server handler 4 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 4 on 60020:
> starting
> 2012-10-01 19:54:13,111 [PRI IPC Server handler 5 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 5 on 60020:
> starting
> 2012-10-01 19:54:13,111 [PRI IPC Server handler 6 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 6 on 60020:
> starting
> 2012-10-01 19:54:13,111 [PRI IPC Server handler 7 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 7 on 60020:
> starting
> 2012-10-01 19:54:13,111 [PRI IPC Server handler 8 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 8 on 60020:
> starting
> 2012-10-01 19:54:13,111 [PRI IPC Server handler 9 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 9 on 60020:
> starting
> 2012-10-01 19:54:13,124 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Serving as
> data3024.ngpipes.milp.ngmoco.com,60020,1349121252040, RPC listening on
> data3024.ngpipes.milp.ngmoco.com/10.100.101.156:60020,
> sessionid=0x137ec64373dd4b5
> 2012-10-01 19:54:13,124
> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1349121252040]
> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
> SplitLogWorker data3024.ngpipes.milp.ngmoco.com,60020,1349121252040
> starting
> 2012-10-01 19:54:13,125 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Registered
> RegionServer MXBean
>
> GC log
> ======
>
> 1.914: [GC 1.914: [ParNew: 99976K->7646K(118016K), 0.0087130 secs]
> 99976K->7646K(123328K), 0.0088110 secs] [Times: user=0.07 sys=0.00,
> real=0.00 secs]
> 416.341: [GC 416.341: [ParNew: 112558K->12169K(118016K), 0.0447760
> secs] 112558K->25025K(133576K), 0.0450080 secs] [Times: user=0.13
> sys=0.02, real=0.05 secs]
> 416.386: [GC [1 CMS-initial-mark: 12855K(15560K)] 25089K(133576K),
> 0.0037570 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 416.390: [CMS-concurrent-mark-start]
> 416.407: [CMS-concurrent-mark: 0.015/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 416.407: [CMS-concurrent-preclean-start]
> 416.408: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 416.408: [GC[YG occupancy: 12233 K (118016 K)]416.408: [Rescan
> (parallel) , 0.0074970 secs]416.416: [weak refs processing, 0.0000370
> secs] [1 CMS-remark: 12855K(15560K)] 25089K(133576K), 0.0076480 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 416.416: [CMS-concurrent-sweep-start]
> 416.419: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 416.419: [CMS-concurrent-reset-start]
> 416.467: [CMS-concurrent-reset: 0.049/0.049 secs] [Times: user=0.01
> sys=0.04, real=0.05 secs]
> 418.468: [GC [1 CMS-initial-mark: 12855K(21428K)] 26216K(139444K),
> 0.0037020 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 418.471: [CMS-concurrent-mark-start]
> 418.487: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 418.487: [CMS-concurrent-preclean-start]
> 418.488: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 418.488: [GC[YG occupancy: 13360 K (118016 K)]418.488: [Rescan
> (parallel) , 0.0090770 secs]418.497: [weak refs processing, 0.0000170
> secs] [1 CMS-remark: 12855K(21428K)] 26216K(139444K), 0.0092220 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 418.497: [CMS-concurrent-sweep-start]
> 418.500: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 418.500: [CMS-concurrent-reset-start]
> 418.511: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 420.512: [GC [1 CMS-initial-mark: 12854K(21428K)] 26344K(139444K),
> 0.0041050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 420.516: [CMS-concurrent-mark-start]
> 420.532: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.05
> sys=0.01, real=0.01 secs]
> 420.532: [CMS-concurrent-preclean-start]
> 420.533: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 420.533: [GC[YG occupancy: 13489 K (118016 K)]420.533: [Rescan
> (parallel) , 0.0014850 secs]420.534: [weak refs processing, 0.0000130
> secs] [1 CMS-remark: 12854K(21428K)] 26344K(139444K), 0.0015920 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 420.534: [CMS-concurrent-sweep-start]
> 420.537: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 420.537: [CMS-concurrent-reset-start]
> 420.548: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 422.437: [GC [1 CMS-initial-mark: 12854K(21428K)] 28692K(139444K),
> 0.0051030 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 422.443: [CMS-concurrent-mark-start]
> 422.458: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 422.458: [CMS-concurrent-preclean-start]
> 422.458: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 422.458: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 427.541:
> [CMS-concurrent-abortable-preclean: 0.678/5.083 secs] [Times:
> user=0.66 sys=0.00, real=5.08 secs]
> 427.541: [GC[YG occupancy: 16198 K (118016 K)]427.541: [Rescan
> (parallel) , 0.0013750 secs]427.543: [weak refs processing, 0.0000140
> secs] [1 CMS-remark: 12854K(21428K)] 29053K(139444K), 0.0014800 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 427.543: [CMS-concurrent-sweep-start]
> 427.544: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 427.544: [CMS-concurrent-reset-start]
> 427.557: [CMS-concurrent-reset: 0.013/0.013 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 429.557: [GC [1 CMS-initial-mark: 12854K(21428K)] 30590K(139444K),
> 0.0043280 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 429.562: [CMS-concurrent-mark-start]
> 429.574: [CMS-concurrent-mark: 0.012/0.012 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 429.574: [CMS-concurrent-preclean-start]
> 429.575: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 429.575: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 434.626:
> [CMS-concurrent-abortable-preclean: 0.747/5.051 secs] [Times:
> user=0.74 sys=0.00, real=5.05 secs]
> 434.626: [GC[YG occupancy: 18154 K (118016 K)]434.626: [Rescan
> (parallel) , 0.0015440 secs]434.627: [weak refs processing, 0.0000140
> secs] [1 CMS-remark: 12854K(21428K)] 31009K(139444K), 0.0016500 secs]
> [Times: user=0.00 sys=0.00, real=0.00 secs]
> 434.628: [CMS-concurrent-sweep-start]
> 434.629: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 434.629: [CMS-concurrent-reset-start]
> 434.641: [CMS-concurrent-reset: 0.012/0.012 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 436.641: [GC [1 CMS-initial-mark: 12854K(21428K)] 31137K(139444K),
> 0.0043440 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 436.646: [CMS-concurrent-mark-start]
> 436.660: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 436.660: [CMS-concurrent-preclean-start]
> 436.661: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 436.661: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 441.773:
> [CMS-concurrent-abortable-preclean: 0.608/5.112 secs] [Times:
> user=0.60 sys=0.00, real=5.11 secs]
> 441.773: [GC[YG occupancy: 18603 K (118016 K)]441.773: [Rescan
> (parallel) , 0.0024270 secs]441.776: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12854K(21428K)] 31458K(139444K), 0.0025200 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 441.776: [CMS-concurrent-sweep-start]
> 441.777: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 441.777: [CMS-concurrent-reset-start]
> 441.788: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 443.788: [GC [1 CMS-initial-mark: 12854K(21428K)] 31586K(139444K),
> 0.0044590 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 443.793: [CMS-concurrent-mark-start]
> 443.804: [CMS-concurrent-mark: 0.011/0.011 secs] [Times: user=0.04
> sys=0.00, real=0.02 secs]
> 443.804: [CMS-concurrent-preclean-start]
> 443.805: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 443.805: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 448.821:
> [CMS-concurrent-abortable-preclean: 0.813/5.016 secs] [Times:
> user=0.81 sys=0.00, real=5.01 secs]
> 448.822: [GC[YG occupancy: 19052 K (118016 K)]448.822: [Rescan
> (parallel) , 0.0013990 secs]448.823: [weak refs processing, 0.0000140
> secs] [1 CMS-remark: 12854K(21428K)] 31907K(139444K), 0.0015040 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 448.823: [CMS-concurrent-sweep-start]
> 448.825: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 448.825: [CMS-concurrent-reset-start]
> 448.837: [CMS-concurrent-reset: 0.012/0.012 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 450.837: [GC [1 CMS-initial-mark: 12854K(21428K)] 32035K(139444K),
> 0.0044510 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 450.842: [CMS-concurrent-mark-start]
> 450.857: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 450.857: [CMS-concurrent-preclean-start]
> 450.858: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 450.858: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 455.922:
> [CMS-concurrent-abortable-preclean: 0.726/5.064 secs] [Times:
> user=0.73 sys=0.00, real=5.06 secs]
> 455.922: [GC[YG occupancy: 19542 K (118016 K)]455.922: [Rescan
> (parallel) , 0.0016050 secs]455.924: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12854K(21428K)] 32397K(139444K), 0.0017340 secs]
> [Times: user=0.02 sys=0.00, real=0.01 secs]
> 455.924: [CMS-concurrent-sweep-start]
> 455.927: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 455.927: [CMS-concurrent-reset-start]
> 455.936: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 457.936: [GC [1 CMS-initial-mark: 12854K(21428K)] 32525K(139444K),
> 0.0026740 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 457.939: [CMS-concurrent-mark-start]
> 457.950: [CMS-concurrent-mark: 0.011/0.011 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 457.950: [CMS-concurrent-preclean-start]
> 457.950: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 457.950: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 463.065:
> [CMS-concurrent-abortable-preclean: 0.708/5.115 secs] [Times:
> user=0.71 sys=0.00, real=5.12 secs]
> 463.066: [GC[YG occupancy: 19991 K (118016 K)]463.066: [Rescan
> (parallel) , 0.0013940 secs]463.067: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12854K(21428K)] 32846K(139444K), 0.0015000 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 463.067: [CMS-concurrent-sweep-start]
> 463.070: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 463.070: [CMS-concurrent-reset-start]
> 463.080: [CMS-concurrent-reset: 0.010/0.010 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 465.080: [GC [1 CMS-initial-mark: 12854K(21428K)] 32974K(139444K),
> 0.0027070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 465.083: [CMS-concurrent-mark-start]
> 465.096: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 465.096: [CMS-concurrent-preclean-start]
> 465.096: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 465.096: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 470.123:
> [CMS-concurrent-abortable-preclean: 0.723/5.027 secs] [Times:
> user=0.71 sys=0.00, real=5.03 secs]
> 470.124: [GC[YG occupancy: 20440 K (118016 K)]470.124: [Rescan
> (parallel) , 0.0011990 secs]470.125: [weak refs processing, 0.0000130
> secs] [1 CMS-remark: 12854K(21428K)] 33295K(139444K), 0.0012990 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 470.125: [CMS-concurrent-sweep-start]
> 470.127: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 470.127: [CMS-concurrent-reset-start]
> 470.137: [CMS-concurrent-reset: 0.010/0.010 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 472.137: [GC [1 CMS-initial-mark: 12854K(21428K)] 33423K(139444K),
> 0.0041330 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 472.141: [CMS-concurrent-mark-start]
> 472.155: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 472.155: [CMS-concurrent-preclean-start]
> 472.156: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 472.156: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 477.179:
> [CMS-concurrent-abortable-preclean: 0.618/5.023 secs] [Times:
> user=0.62 sys=0.00, real=5.02 secs]
> 477.179: [GC[YG occupancy: 20889 K (118016 K)]477.179: [Rescan
> (parallel) , 0.0014510 secs]477.180: [weak refs processing, 0.0000090
> secs] [1 CMS-remark: 12854K(21428K)] 33744K(139444K), 0.0015250 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 477.181: [CMS-concurrent-sweep-start]
> 477.183: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 477.183: [CMS-concurrent-reset-start]
> 477.192: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 479.192: [GC [1 CMS-initial-mark: 12854K(21428K)] 33872K(139444K),
> 0.0039730 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 479.196: [CMS-concurrent-mark-start]
> 479.209: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 479.209: [CMS-concurrent-preclean-start]
> 479.210: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 479.210: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 484.295:
> [CMS-concurrent-abortable-preclean: 0.757/5.085 secs] [Times:
> user=0.77 sys=0.00, real=5.09 secs]
> 484.295: [GC[YG occupancy: 21583 K (118016 K)]484.295: [Rescan
> (parallel) , 0.0013210 secs]484.297: [weak refs processing, 0.0000150
> secs] [1 CMS-remark: 12854K(21428K)] 34438K(139444K), 0.0014200 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 484.297: [CMS-concurrent-sweep-start]
> 484.298: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 484.298: [CMS-concurrent-reset-start]
> 484.307: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 486.308: [GC [1 CMS-initial-mark: 12854K(21428K)] 34566K(139444K),
> 0.0041800 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 486.312: [CMS-concurrent-mark-start]
> 486.324: [CMS-concurrent-mark: 0.012/0.012 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 486.324: [CMS-concurrent-preclean-start]
> 486.324: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 486.324: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 491.394:
> [CMS-concurrent-abortable-preclean: 0.565/5.070 secs] [Times:
> user=0.56 sys=0.00, real=5.06 secs]
> 491.394: [GC[YG occupancy: 22032 K (118016 K)]491.395: [Rescan
> (parallel) , 0.0018030 secs]491.396: [weak refs processing, 0.0000090
> secs] [1 CMS-remark: 12854K(21428K)] 34887K(139444K), 0.0018830 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 491.397: [CMS-concurrent-sweep-start]
> 491.398: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 491.398: [CMS-concurrent-reset-start]
> 491.406: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 493.407: [GC [1 CMS-initial-mark: 12854K(21428K)] 35080K(139444K),
> 0.0027620 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 493.410: [CMS-concurrent-mark-start]
> 493.420: [CMS-concurrent-mark: 0.010/0.010 secs] [Times: user=0.04
> sys=0.00, real=0.01 secs]
> 493.420: [CMS-concurrent-preclean-start]
> 493.420: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 493.420: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 498.525:
> [CMS-concurrent-abortable-preclean: 0.600/5.106 secs] [Times:
> user=0.61 sys=0.00, real=5.11 secs]
> 498.526: [GC[YG occupancy: 22545 K (118016 K)]498.526: [Rescan
> (parallel) , 0.0019450 secs]498.528: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12854K(21428K)] 35400K(139444K), 0.0020460 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 498.528: [CMS-concurrent-sweep-start]
> 498.530: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 498.530: [CMS-concurrent-reset-start]
> 498.538: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 500.538: [GC [1 CMS-initial-mark: 12854K(21428K)] 35529K(139444K),
> 0.0027790 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 500.541: [CMS-concurrent-mark-start]
> 500.554: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 500.554: [CMS-concurrent-preclean-start]
> 500.554: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 500.554: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 505.616:
> [CMS-concurrent-abortable-preclean: 0.557/5.062 secs] [Times:
> user=0.56 sys=0.00, real=5.06 secs]
> 505.617: [GC[YG occupancy: 22995 K (118016 K)]505.617: [Rescan
> (parallel) , 0.0023440 secs]505.619: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12854K(21428K)] 35850K(139444K), 0.0024280 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 505.619: [CMS-concurrent-sweep-start]
> 505.621: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 505.621: [CMS-concurrent-reset-start]
> 505.629: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 507.630: [GC [1 CMS-initial-mark: 12854K(21428K)] 35978K(139444K),
> 0.0027500 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 507.632: [CMS-concurrent-mark-start]
> 507.645: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 507.645: [CMS-concurrent-preclean-start]
> 507.646: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 507.646: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 512.697:
> [CMS-concurrent-abortable-preclean: 0.562/5.051 secs] [Times:
> user=0.57 sys=0.00, real=5.05 secs]
> 512.697: [GC[YG occupancy: 23484 K (118016 K)]512.697: [Rescan
> (parallel) , 0.0020030 secs]512.699: [weak refs processing, 0.0000090
> secs] [1 CMS-remark: 12854K(21428K)] 36339K(139444K), 0.0020830 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 512.700: [CMS-concurrent-sweep-start]
> 512.701: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 512.701: [CMS-concurrent-reset-start]
> 512.709: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 514.710: [GC [1 CMS-initial-mark: 12854K(21428K)] 36468K(139444K),
> 0.0028400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 514.713: [CMS-concurrent-mark-start]
> 514.725: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 514.725: [CMS-concurrent-preclean-start]
> 514.725: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 514.725: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 519.800:
> [CMS-concurrent-abortable-preclean: 0.619/5.075 secs] [Times:
> user=0.66 sys=0.00, real=5.07 secs]
> 519.801: [GC[YG occupancy: 25022 K (118016 K)]519.801: [Rescan
> (parallel) , 0.0023950 secs]519.803: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12854K(21428K)] 37877K(139444K), 0.0024980 secs]
> [Times: user=0.02 sys=0.00, real=0.01 secs]
> 519.803: [CMS-concurrent-sweep-start]
> 519.805: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 519.805: [CMS-concurrent-reset-start]
> 519.813: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 521.814: [GC [1 CMS-initial-mark: 12854K(21428K)] 38005K(139444K),
> 0.0045520 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 521.818: [CMS-concurrent-mark-start]
> 521.833: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 521.833: [CMS-concurrent-preclean-start]
> 521.833: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 521.833: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 526.840:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 526.840: [GC[YG occupancy: 25471 K (118016 K)]526.840: [Rescan
> (parallel) , 0.0024440 secs]526.843: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12854K(21428K)] 38326K(139444K), 0.0025440 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 526.843: [CMS-concurrent-sweep-start]
> 526.845: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 526.845: [CMS-concurrent-reset-start]
> 526.853: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 528.853: [GC [1 CMS-initial-mark: 12849K(21428K)] 38449K(139444K),
> 0.0045550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 528.858: [CMS-concurrent-mark-start]
> 528.872: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 528.872: [CMS-concurrent-preclean-start]
> 528.873: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 528.873: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 533.876:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 533.876: [GC[YG occupancy: 25919 K (118016 K)]533.877: [Rescan
> (parallel) , 0.0028370 secs]533.879: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 38769K(139444K), 0.0029390 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 533.880: [CMS-concurrent-sweep-start]
> 533.882: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 533.882: [CMS-concurrent-reset-start]
> 533.891: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 535.891: [GC [1 CMS-initial-mark: 12849K(21428K)] 38897K(139444K),
> 0.0046460 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 535.896: [CMS-concurrent-mark-start]
> 535.910: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 535.910: [CMS-concurrent-preclean-start]
> 535.911: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 535.911: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 540.917:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 540.917: [GC[YG occupancy: 26367 K (118016 K)]540.917: [Rescan
> (parallel) , 0.0025680 secs]540.920: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 39217K(139444K), 0.0026690 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 540.920: [CMS-concurrent-sweep-start]
> 540.922: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 540.922: [CMS-concurrent-reset-start]
> 540.930: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 542.466: [GC [1 CMS-initial-mark: 12849K(21428K)] 39555K(139444K),
> 0.0050040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 542.471: [CMS-concurrent-mark-start]
> 542.486: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 542.486: [CMS-concurrent-preclean-start]
> 542.486: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 542.486: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 547.491:
> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 547.491: [GC[YG occupancy: 27066 K (118016 K)]547.491: [Rescan
> (parallel) , 0.0024720 secs]547.494: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 39916K(139444K), 0.0025720 secs]
> [Times: user=0.02 sys=0.00, real=0.01 secs]
> 547.494: [CMS-concurrent-sweep-start]
> 547.496: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 547.496: [CMS-concurrent-reset-start]
> 547.505: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 549.506: [GC [1 CMS-initial-mark: 12849K(21428K)] 40044K(139444K),
> 0.0048760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 549.511: [CMS-concurrent-mark-start]
> 549.524: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 549.524: [CMS-concurrent-preclean-start]
> 549.525: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 549.525: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 554.530:
> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 554.530: [GC[YG occupancy: 27515 K (118016 K)]554.530: [Rescan
> (parallel) , 0.0025270 secs]554.533: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 40364K(139444K), 0.0026190 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 554.533: [CMS-concurrent-sweep-start]
> 554.534: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 554.534: [CMS-concurrent-reset-start]
> 554.542: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 556.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 40493K(139444K),
> 0.0048950 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 556.548: [CMS-concurrent-mark-start]
> 556.562: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 556.562: [CMS-concurrent-preclean-start]
> 556.562: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 556.563: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 561.565:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 561.566: [GC[YG occupancy: 27963 K (118016 K)]561.566: [Rescan
> (parallel) , 0.0025900 secs]561.568: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 40813K(139444K), 0.0026910 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 561.569: [CMS-concurrent-sweep-start]
> 561.570: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 561.570: [CMS-concurrent-reset-start]
> 561.578: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 563.579: [GC [1 CMS-initial-mark: 12849K(21428K)] 40941K(139444K),
> 0.0049390 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 563.584: [CMS-concurrent-mark-start]
> 563.598: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 563.598: [CMS-concurrent-preclean-start]
> 563.598: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 563.598: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 568.693:
> [CMS-concurrent-abortable-preclean: 0.717/5.095 secs] [Times:
> user=0.71 sys=0.00, real=5.09 secs]
> 568.694: [GC[YG occupancy: 28411 K (118016 K)]568.694: [Rescan
> (parallel) , 0.0035750 secs]568.697: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 41261K(139444K), 0.0036740 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 568.698: [CMS-concurrent-sweep-start]
> 568.700: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 568.700: [CMS-concurrent-reset-start]
> 568.709: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 570.709: [GC [1 CMS-initial-mark: 12849K(21428K)] 41389K(139444K),
> 0.0048710 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 570.714: [CMS-concurrent-mark-start]
> 570.729: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 570.729: [CMS-concurrent-preclean-start]
> 570.729: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 570.729: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 575.738:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 575.738: [GC[YG occupancy: 28900 K (118016 K)]575.738: [Rescan
> (parallel) , 0.0036390 secs]575.742: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 41750K(139444K), 0.0037440 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 575.742: [CMS-concurrent-sweep-start]
> 575.744: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 575.744: [CMS-concurrent-reset-start]
> 575.752: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 577.752: [GC [1 CMS-initial-mark: 12849K(21428K)] 41878K(139444K),
> 0.0050100 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 577.758: [CMS-concurrent-mark-start]
> 577.772: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 577.772: [CMS-concurrent-preclean-start]
> 577.773: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 577.773: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 582.779:
> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 582.779: [GC[YG occupancy: 29348 K (118016 K)]582.779: [Rescan
> (parallel) , 0.0026100 secs]582.782: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 42198K(139444K), 0.0027110 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 582.782: [CMS-concurrent-sweep-start]
> 582.784: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 582.784: [CMS-concurrent-reset-start]
> 582.792: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 584.792: [GC [1 CMS-initial-mark: 12849K(21428K)] 42326K(139444K),
> 0.0050510 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 584.798: [CMS-concurrent-mark-start]
> 584.812: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 584.812: [CMS-concurrent-preclean-start]
> 584.813: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 584.813: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 589.819:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 589.819: [GC[YG occupancy: 29797 K (118016 K)]589.819: [Rescan
> (parallel) , 0.0039510 secs]589.823: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 42647K(139444K), 0.0040460 secs]
> [Times: user=0.03 sys=0.00, real=0.01 secs]
> 589.824: [CMS-concurrent-sweep-start]
> 589.826: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 589.826: [CMS-concurrent-reset-start]
> 589.835: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 591.835: [GC [1 CMS-initial-mark: 12849K(21428K)] 42775K(139444K),
> 0.0050090 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 591.840: [CMS-concurrent-mark-start]
> 591.855: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 591.855: [CMS-concurrent-preclean-start]
> 591.855: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 591.855: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 596.857:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 596.857: [GC[YG occupancy: 31414 K (118016 K)]596.857: [Rescan
> (parallel) , 0.0028500 secs]596.860: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 44264K(139444K), 0.0029480 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 596.861: [CMS-concurrent-sweep-start]
> 596.862: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 596.862: [CMS-concurrent-reset-start]
> 596.870: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 598.871: [GC [1 CMS-initial-mark: 12849K(21428K)] 44392K(139444K),
> 0.0050640 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 598.876: [CMS-concurrent-mark-start]
> 598.890: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 598.890: [CMS-concurrent-preclean-start]
> 598.891: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 598.891: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 603.897:
> [CMS-concurrent-abortable-preclean: 0.705/5.007 secs] [Times:
> user=0.72 sys=0.00, real=5.01 secs]
> 603.898: [GC[YG occupancy: 32032 K (118016 K)]603.898: [Rescan
> (parallel) , 0.0039660 secs]603.902: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 44882K(139444K), 0.0040680 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 603.902: [CMS-concurrent-sweep-start]
> 603.903: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 603.903: [CMS-concurrent-reset-start]
> 603.912: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 605.912: [GC [1 CMS-initial-mark: 12849K(21428K)] 45010K(139444K),
> 0.0053650 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 605.918: [CMS-concurrent-mark-start]
> 605.932: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 605.932: [CMS-concurrent-preclean-start]
> 605.932: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 605.932: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 610.939:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 610.940: [GC[YG occupancy: 32481 K (118016 K)]610.940: [Rescan
> (parallel) , 0.0032540 secs]610.943: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 45330K(139444K), 0.0033560 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 610.943: [CMS-concurrent-sweep-start]
> 610.944: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 610.945: [CMS-concurrent-reset-start]
> 610.953: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 612.486: [GC [1 CMS-initial-mark: 12849K(21428K)] 45459K(139444K),
> 0.0055070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 612.492: [CMS-concurrent-mark-start]
> 612.505: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 612.505: [CMS-concurrent-preclean-start]
> 612.506: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 612.506: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 617.511:
> [CMS-concurrent-abortable-preclean: 0.700/5.006 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 617.512: [GC[YG occupancy: 32929 K (118016 K)]617.512: [Rescan
> (parallel) , 0.0037500 secs]617.516: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 45779K(139444K), 0.0038560 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 617.516: [CMS-concurrent-sweep-start]
> 617.518: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 617.518: [CMS-concurrent-reset-start]
> 617.527: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 619.528: [GC [1 CMS-initial-mark: 12849K(21428K)] 45907K(139444K),
> 0.0053320 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 619.533: [CMS-concurrent-mark-start]
> 619.546: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 619.546: [CMS-concurrent-preclean-start]
> 619.547: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 619.547: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 624.552:
> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 624.552: [GC[YG occupancy: 33377 K (118016 K)]624.552: [Rescan
> (parallel) , 0.0037290 secs]624.556: [weak refs processing, 0.0000130
> secs] [1 CMS-remark: 12849K(21428K)] 46227K(139444K), 0.0038330 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 624.556: [CMS-concurrent-sweep-start]
> 624.558: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 624.558: [CMS-concurrent-reset-start]
> 624.568: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 626.568: [GC [1 CMS-initial-mark: 12849K(21428K)] 46355K(139444K),
> 0.0054240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 626.574: [CMS-concurrent-mark-start]
> 626.588: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 626.588: [CMS-concurrent-preclean-start]
> 626.588: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 626.588: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 631.592:
> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 631.592: [GC[YG occupancy: 33825 K (118016 K)]631.593: [Rescan
> (parallel) , 0.0041600 secs]631.597: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 46675K(139444K), 0.0042650 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 631.597: [CMS-concurrent-sweep-start]
> 631.598: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 631.598: [CMS-concurrent-reset-start]
> 631.607: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 632.495: [GC [1 CMS-initial-mark: 12849K(21428K)] 46839K(139444K),
> 0.0054380 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 632.501: [CMS-concurrent-mark-start]
> 632.516: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 632.516: [CMS-concurrent-preclean-start]
> 632.517: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 632.517: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 637.519:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 637.519: [GC[YG occupancy: 34350 K (118016 K)]637.519: [Rescan
> (parallel) , 0.0025310 secs]637.522: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 47200K(139444K), 0.0026540 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 637.522: [CMS-concurrent-sweep-start]
> 637.523: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 637.523: [CMS-concurrent-reset-start]
> 637.532: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 639.532: [GC [1 CMS-initial-mark: 12849K(21428K)] 47328K(139444K),
> 0.0055330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 639.538: [CMS-concurrent-mark-start]
> 639.551: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 639.551: [CMS-concurrent-preclean-start]
> 639.552: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 639.552: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 644.561:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 644.561: [GC[YG occupancy: 34798 K (118016 K)]644.561: [Rescan
> (parallel) , 0.0040620 secs]644.565: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 47648K(139444K), 0.0041610 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 644.566: [CMS-concurrent-sweep-start]
> 644.568: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 644.568: [CMS-concurrent-reset-start]
> 644.577: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 646.577: [GC [1 CMS-initial-mark: 12849K(21428K)] 47776K(139444K),
> 0.0054660 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 646.583: [CMS-concurrent-mark-start]
> 646.596: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 646.596: [CMS-concurrent-preclean-start]
> 646.597: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 646.597: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 651.678:
> [CMS-concurrent-abortable-preclean: 0.732/5.081 secs] [Times:
> user=0.74 sys=0.00, real=5.08 secs]
> 651.678: [GC[YG occupancy: 35246 K (118016 K)]651.678: [Rescan
> (parallel) , 0.0025920 secs]651.681: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 48096K(139444K), 0.0026910 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 651.681: [CMS-concurrent-sweep-start]
> 651.682: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 651.682: [CMS-concurrent-reset-start]
> 651.690: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 653.691: [GC [1 CMS-initial-mark: 12849K(21428K)] 48224K(139444K),
> 0.0055640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 653.696: [CMS-concurrent-mark-start]
> 653.711: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 653.711: [CMS-concurrent-preclean-start]
> 653.711: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 653.711: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 658.721:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 658.721: [GC[YG occupancy: 35695 K (118016 K)]658.721: [Rescan
> (parallel) , 0.0040160 secs]658.725: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 48545K(139444K), 0.0041130 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 658.725: [CMS-concurrent-sweep-start]
> 658.727: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 658.728: [CMS-concurrent-reset-start]
> 658.737: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 660.737: [GC [1 CMS-initial-mark: 12849K(21428K)] 48673K(139444K),
> 0.0055230 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 660.743: [CMS-concurrent-mark-start]
> 660.756: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 660.756: [CMS-concurrent-preclean-start]
> 660.757: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 660.757: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 665.767:
> [CMS-concurrent-abortable-preclean: 0.704/5.011 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 665.768: [GC[YG occupancy: 36289 K (118016 K)]665.768: [Rescan
> (parallel) , 0.0033040 secs]665.771: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 49139K(139444K), 0.0034090 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 665.771: [CMS-concurrent-sweep-start]
> 665.773: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 665.773: [CMS-concurrent-reset-start]
> 665.781: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 667.781: [GC [1 CMS-initial-mark: 12849K(21428K)] 49267K(139444K),
> 0.0057830 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 667.787: [CMS-concurrent-mark-start]
> 667.802: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 667.802: [CMS-concurrent-preclean-start]
> 667.802: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 667.802: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 672.809:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 672.810: [GC[YG occupancy: 36737 K (118016 K)]672.810: [Rescan
> (parallel) , 0.0037010 secs]672.813: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 49587K(139444K), 0.0038010 secs]
> [Times: user=0.03 sys=0.00, real=0.01 secs]
> 672.814: [CMS-concurrent-sweep-start]
> 672.815: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 672.815: [CMS-concurrent-reset-start]
> 672.824: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 674.824: [GC [1 CMS-initial-mark: 12849K(21428K)] 49715K(139444K),
> 0.0058920 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
> 674.830: [CMS-concurrent-mark-start]
> 674.845: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 674.845: [CMS-concurrent-preclean-start]
> 674.845: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 674.845: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 679.849:
> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 679.850: [GC[YG occupancy: 37185 K (118016 K)]679.850: [Rescan
> (parallel) , 0.0033420 secs]679.853: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 50035K(139444K), 0.0034440 secs]
> [Times: user=0.02 sys=0.00, real=0.01 secs]
> 679.853: [CMS-concurrent-sweep-start]
> 679.855: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 679.855: [CMS-concurrent-reset-start]
> 679.863: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 681.864: [GC [1 CMS-initial-mark: 12849K(21428K)] 50163K(139444K),
> 0.0058780 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 681.870: [CMS-concurrent-mark-start]
> 681.884: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 681.884: [CMS-concurrent-preclean-start]
> 681.884: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 681.884: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 686.890:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 686.891: [GC[YG occupancy: 37634 K (118016 K)]686.891: [Rescan
> (parallel) , 0.0044480 secs]686.895: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 50483K(139444K), 0.0045570 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 686.896: [CMS-concurrent-sweep-start]
> 686.897: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 686.897: [CMS-concurrent-reset-start]
> 686.905: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 688.905: [GC [1 CMS-initial-mark: 12849K(21428K)] 50612K(139444K),
> 0.0058940 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 688.911: [CMS-concurrent-mark-start]
> 688.925: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 688.925: [CMS-concurrent-preclean-start]
> 688.925: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 688.926: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 694.041:
> [CMS-concurrent-abortable-preclean: 0.718/5.115 secs] [Times:
> user=0.72 sys=0.00, real=5.11 secs]
> 694.041: [GC[YG occupancy: 38122 K (118016 K)]694.041: [Rescan
> (parallel) , 0.0028640 secs]694.044: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 50972K(139444K), 0.0029660 secs]
> [Times: user=0.03 sys=0.00, real=0.01 secs]
> 694.044: [CMS-concurrent-sweep-start]
> 694.046: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 694.046: [CMS-concurrent-reset-start]
> 694.054: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 696.054: [GC [1 CMS-initial-mark: 12849K(21428K)] 51100K(139444K),
> 0.0060550 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 696.060: [CMS-concurrent-mark-start]
> 696.074: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 696.074: [CMS-concurrent-preclean-start]
> 696.075: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 696.075: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 701.078:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 701.079: [GC[YG occupancy: 38571 K (118016 K)]701.079: [Rescan
> (parallel) , 0.0064210 secs]701.085: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 51421K(139444K), 0.0065220 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 701.085: [CMS-concurrent-sweep-start]
> 701.087: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 701.088: [CMS-concurrent-reset-start]
> 701.097: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 703.097: [GC [1 CMS-initial-mark: 12849K(21428K)] 51549K(139444K),
> 0.0058470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 703.103: [CMS-concurrent-mark-start]
> 703.116: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 703.116: [CMS-concurrent-preclean-start]
> 703.117: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 703.117: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 708.125:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 708.125: [GC[YG occupancy: 39054 K (118016 K)]708.125: [Rescan
> (parallel) , 0.0037190 secs]708.129: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 51904K(139444K), 0.0038220 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 708.129: [CMS-concurrent-sweep-start]
> 708.131: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 708.131: [CMS-concurrent-reset-start]
> 708.139: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 710.139: [GC [1 CMS-initial-mark: 12849K(21428K)] 52032K(139444K),
> 0.0059770 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 710.145: [CMS-concurrent-mark-start]
> 710.158: [CMS-concurrent-mark: 0.012/0.012 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 710.158: [CMS-concurrent-preclean-start]
> 710.158: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 710.158: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 715.169:
> [CMS-concurrent-abortable-preclean: 0.705/5.011 secs] [Times:
> user=0.69 sys=0.01, real=5.01 secs]
> 715.169: [GC[YG occupancy: 39503 K (118016 K)]715.169: [Rescan
> (parallel) , 0.0042370 secs]715.173: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 52353K(139444K), 0.0043410 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 715.174: [CMS-concurrent-sweep-start]
> 715.176: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 715.176: [CMS-concurrent-reset-start]
> 715.185: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 717.185: [GC [1 CMS-initial-mark: 12849K(21428K)] 52481K(139444K),
> 0.0060050 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 717.191: [CMS-concurrent-mark-start]
> 717.205: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 717.205: [CMS-concurrent-preclean-start]
> 717.206: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 717.206: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 722.209:
> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
> user=0.71 sys=0.00, real=5.00 secs]
> 722.210: [GC[YG occupancy: 40161 K (118016 K)]722.210: [Rescan
> (parallel) , 0.0041630 secs]722.214: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 53011K(139444K), 0.0042630 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 722.214: [CMS-concurrent-sweep-start]
> 722.216: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 722.216: [CMS-concurrent-reset-start]
> 722.226: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 722.521: [GC [1 CMS-initial-mark: 12849K(21428K)] 53099K(139444K),
> 0.0062380 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 722.528: [CMS-concurrent-mark-start]
> 722.544: [CMS-concurrent-mark: 0.015/0.016 secs] [Times: user=0.05
> sys=0.01, real=0.02 secs]
> 722.544: [CMS-concurrent-preclean-start]
> 722.544: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 722.544: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 727.558:
> [CMS-concurrent-abortable-preclean: 0.709/5.014 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 727.558: [GC[YG occupancy: 40610 K (118016 K)]727.558: [Rescan
> (parallel) , 0.0041700 secs]727.563: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 53460K(139444K), 0.0042780 secs]
> [Times: user=0.05 sys=0.00, real=0.00 secs]
> 727.563: [CMS-concurrent-sweep-start]
> 727.564: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 727.564: [CMS-concurrent-reset-start]
> 727.573: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.02 secs]
> 729.574: [GC [1 CMS-initial-mark: 12849K(21428K)] 53588K(139444K),
> 0.0062700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 729.580: [CMS-concurrent-mark-start]
> 729.595: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 729.595: [CMS-concurrent-preclean-start]
> 729.595: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 729.595: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 734.597:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 734.597: [GC[YG occupancy: 41058 K (118016 K)]734.597: [Rescan
> (parallel) , 0.0053870 secs]734.603: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 53908K(139444K), 0.0054870 secs]
> [Times: user=0.06 sys=0.00, real=0.00 secs]
> 734.603: [CMS-concurrent-sweep-start]
> 734.604: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 734.604: [CMS-concurrent-reset-start]
> 734.614: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 734.877: [GC [1 CMS-initial-mark: 12849K(21428K)] 53908K(139444K),
> 0.0067230 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 734.884: [CMS-concurrent-mark-start]
> 734.899: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 734.899: [CMS-concurrent-preclean-start]
> 734.899: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 734.899: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 739.905:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 739.906: [GC[YG occupancy: 41379 K (118016 K)]739.906: [Rescan
> (parallel) , 0.0050680 secs]739.911: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 54228K(139444K), 0.0051690 secs]
> [Times: user=0.05 sys=0.00, real=0.00 secs]
> 739.911: [CMS-concurrent-sweep-start]
> 739.912: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 739.912: [CMS-concurrent-reset-start]
> 739.921: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 741.922: [GC [1 CMS-initial-mark: 12849K(21428K)] 54356K(139444K),
> 0.0062880 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 741.928: [CMS-concurrent-mark-start]
> 741.942: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 741.942: [CMS-concurrent-preclean-start]
> 741.943: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 741.943: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 747.059:
> [CMS-concurrent-abortable-preclean: 0.711/5.117 secs] [Times:
> user=0.71 sys=0.00, real=5.12 secs]
> 747.060: [GC[YG occupancy: 41827 K (118016 K)]747.060: [Rescan
> (parallel) , 0.0051040 secs]747.065: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 54677K(139444K), 0.0052090 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 747.065: [CMS-concurrent-sweep-start]
> 747.067: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 747.067: [CMS-concurrent-reset-start]
> 747.075: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 749.075: [GC [1 CMS-initial-mark: 12849K(21428K)] 54805K(139444K),
> 0.0063470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 749.082: [CMS-concurrent-mark-start]
> 749.095: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 749.095: [CMS-concurrent-preclean-start]
> 749.096: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 749.096: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 754.175:
> [CMS-concurrent-abortable-preclean: 0.718/5.079 secs] [Times:
> user=0.72 sys=0.00, real=5.08 secs]
> 754.175: [GC[YG occupancy: 42423 K (118016 K)]754.175: [Rescan
> (parallel) , 0.0051290 secs]754.180: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 55273K(139444K), 0.0052290 secs]
> [Times: user=0.05 sys=0.00, real=0.00 secs]
> 754.181: [CMS-concurrent-sweep-start]
> 754.182: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 754.182: [CMS-concurrent-reset-start]
> 754.191: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 756.191: [GC [1 CMS-initial-mark: 12849K(21428K)] 55401K(139444K),
> 0.0064020 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 756.198: [CMS-concurrent-mark-start]
> 756.212: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 756.212: [CMS-concurrent-preclean-start]
> 756.213: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 756.213: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 761.217:
> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 761.218: [GC[YG occupancy: 42871 K (118016 K)]761.218: [Rescan
> (parallel) , 0.0052310 secs]761.223: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 55721K(139444K), 0.0053300 secs]
> [Times: user=0.06 sys=0.00, real=0.00 secs]
> 761.223: [CMS-concurrent-sweep-start]
> 761.225: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 761.225: [CMS-concurrent-reset-start]
> 761.234: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 763.234: [GC [1 CMS-initial-mark: 12849K(21428K)] 55849K(139444K),
> 0.0045400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 763.239: [CMS-concurrent-mark-start]
> 763.253: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 763.253: [CMS-concurrent-preclean-start]
> 763.253: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 763.253: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 768.348:
> [CMS-concurrent-abortable-preclean: 0.690/5.095 secs] [Times:
> user=0.69 sys=0.00, real=5.10 secs]
> 768.349: [GC[YG occupancy: 43320 K (118016 K)]768.349: [Rescan
> (parallel) , 0.0045140 secs]768.353: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 56169K(139444K), 0.0046170 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 768.353: [CMS-concurrent-sweep-start]
> 768.356: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 768.356: [CMS-concurrent-reset-start]
> 768.365: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 770.365: [GC [1 CMS-initial-mark: 12849K(21428K)] 56298K(139444K),
> 0.0063950 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 770.372: [CMS-concurrent-mark-start]
> 770.388: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 770.388: [CMS-concurrent-preclean-start]
> 770.388: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 770.388: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 775.400:
> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 775.401: [GC[YG occupancy: 43768 K (118016 K)]775.401: [Rescan
> (parallel) , 0.0043990 secs]775.405: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 56618K(139444K), 0.0045000 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 775.405: [CMS-concurrent-sweep-start]
> 775.407: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 775.407: [CMS-concurrent-reset-start]
> 775.417: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 777.417: [GC [1 CMS-initial-mark: 12849K(21428K)] 56746K(139444K),
> 0.0064580 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 777.423: [CMS-concurrent-mark-start]
> 777.438: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 777.438: [CMS-concurrent-preclean-start]
> 777.439: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 777.439: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 782.448:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 782.448: [GC[YG occupancy: 44321 K (118016 K)]782.448: [Rescan
> (parallel) , 0.0054760 secs]782.454: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 57171K(139444K), 0.0055780 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 782.454: [CMS-concurrent-sweep-start]
> 782.455: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 782.455: [CMS-concurrent-reset-start]
> 782.464: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 782.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 57235K(139444K),
> 0.0066970 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 782.550: [CMS-concurrent-mark-start]
> 782.567: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 782.567: [CMS-concurrent-preclean-start]
> 782.568: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 782.568: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 787.574:
> [CMS-concurrent-abortable-preclean: 0.700/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 787.574: [GC[YG occupancy: 44746 K (118016 K)]787.574: [Rescan
> (parallel) , 0.0049170 secs]787.579: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 57596K(139444K), 0.0050210 secs]
> [Times: user=0.06 sys=0.00, real=0.00 secs]
> 787.579: [CMS-concurrent-sweep-start]
> 787.581: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 787.581: [CMS-concurrent-reset-start]
> 787.590: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 789.591: [GC [1 CMS-initial-mark: 12849K(21428K)] 57724K(139444K),
> 0.0066850 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 789.598: [CMS-concurrent-mark-start]
> 789.614: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 789.614: [CMS-concurrent-preclean-start]
> 789.615: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 789.615: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 794.626:
> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 794.627: [GC[YG occupancy: 45195 K (118016 K)]794.627: [Rescan
> (parallel) , 0.0056520 secs]794.632: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 58044K(139444K), 0.0057510 secs]
> [Times: user=0.06 sys=0.00, real=0.00 secs]
> 794.632: [CMS-concurrent-sweep-start]
> 794.634: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 794.634: [CMS-concurrent-reset-start]
> 794.643: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 796.643: [GC [1 CMS-initial-mark: 12849K(21428K)] 58172K(139444K),
> 0.0067410 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 796.650: [CMS-concurrent-mark-start]
> 796.666: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 796.666: [CMS-concurrent-preclean-start]
> 796.667: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 796.667: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 801.670:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 801.670: [GC[YG occupancy: 45643 K (118016 K)]801.670: [Rescan
> (parallel) , 0.0043550 secs]801.675: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 58493K(139444K), 0.0044580 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 801.675: [CMS-concurrent-sweep-start]
> 801.677: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 801.677: [CMS-concurrent-reset-start]
> 801.686: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 803.686: [GC [1 CMS-initial-mark: 12849K(21428K)] 58621K(139444K),
> 0.0067250 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 803.693: [CMS-concurrent-mark-start]
> 803.708: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 803.708: [CMS-concurrent-preclean-start]
> 803.709: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 803.709: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 808.717:
> [CMS-concurrent-abortable-preclean: 0.702/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 808.717: [GC[YG occupancy: 46091 K (118016 K)]808.717: [Rescan
> (parallel) , 0.0034790 secs]808.720: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 58941K(139444K), 0.0035820 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 808.721: [CMS-concurrent-sweep-start]
> 808.722: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 808.722: [CMS-concurrent-reset-start]
> 808.730: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 810.731: [GC [1 CMS-initial-mark: 12849K(21428K)] 59069K(139444K),
> 0.0067580 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 810.738: [CMS-concurrent-mark-start]
> 810.755: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 810.755: [CMS-concurrent-preclean-start]
> 810.755: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 810.755: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 815.823:
> [CMS-concurrent-abortable-preclean: 0.715/5.068 secs] [Times:
> user=0.72 sys=0.00, real=5.06 secs]
> 815.824: [GC[YG occupancy: 46580 K (118016 K)]815.824: [Rescan
> (parallel) , 0.0048490 secs]815.829: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 59430K(139444K), 0.0049600 secs]
> [Times: user=0.06 sys=0.00, real=0.00 secs]
> 815.829: [CMS-concurrent-sweep-start]
> 815.831: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 815.831: [CMS-concurrent-reset-start]
> 815.840: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 817.840: [GC [1 CMS-initial-mark: 12849K(21428K)] 59558K(139444K),
> 0.0068880 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 817.847: [CMS-concurrent-mark-start]
> 817.864: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 817.864: [CMS-concurrent-preclean-start]
> 817.865: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 817.865: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 822.868:
> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
> user=0.69 sys=0.01, real=5.00 secs]
> 822.868: [GC[YG occupancy: 47028 K (118016 K)]822.868: [Rescan
> (parallel) , 0.0061120 secs]822.874: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 59878K(139444K), 0.0062150 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 822.874: [CMS-concurrent-sweep-start]
> 822.876: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 822.876: [CMS-concurrent-reset-start]
> 822.885: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 824.885: [GC [1 CMS-initial-mark: 12849K(21428K)] 60006K(139444K),
> 0.0068610 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 824.892: [CMS-concurrent-mark-start]
> 824.908: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 824.908: [CMS-concurrent-preclean-start]
> 824.908: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 824.908: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 829.914:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 829.915: [GC[YG occupancy: 47477 K (118016 K)]829.915: [Rescan
> (parallel) , 0.0034890 secs]829.918: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 60327K(139444K), 0.0035930 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 829.918: [CMS-concurrent-sweep-start]
> 829.920: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 829.920: [CMS-concurrent-reset-start]
> 829.930: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 831.930: [GC [1 CMS-initial-mark: 12849K(21428K)] 60455K(139444K),
> 0.0069040 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 831.937: [CMS-concurrent-mark-start]
> 831.953: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 831.953: [CMS-concurrent-preclean-start]
> 831.954: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 831.954: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 836.957:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.71 sys=0.00, real=5.00 secs]
> 836.957: [GC[YG occupancy: 47925 K (118016 K)]836.957: [Rescan
> (parallel) , 0.0060440 secs]836.963: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 60775K(139444K), 0.0061520 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 836.964: [CMS-concurrent-sweep-start]
> 836.965: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 836.965: [CMS-concurrent-reset-start]
> 836.974: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 838.974: [GC [1 CMS-initial-mark: 12849K(21428K)] 60903K(139444K),
> 0.0069860 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 838.982: [CMS-concurrent-mark-start]
> 838.997: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 838.998: [CMS-concurrent-preclean-start]
> 838.998: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 838.998: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 844.091:
> [CMS-concurrent-abortable-preclean: 0.718/5.093 secs] [Times:
> user=0.72 sys=0.00, real=5.09 secs]
> 844.092: [GC[YG occupancy: 48731 K (118016 K)]844.092: [Rescan
> (parallel) , 0.0052610 secs]844.097: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 61581K(139444K), 0.0053620 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 844.097: [CMS-concurrent-sweep-start]
> 844.099: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 844.099: [CMS-concurrent-reset-start]
> 844.108: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 846.109: [GC [1 CMS-initial-mark: 12849K(21428K)] 61709K(139444K),
> 0.0071980 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 846.116: [CMS-concurrent-mark-start]
> 846.133: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 846.133: [CMS-concurrent-preclean-start]
> 846.134: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 846.134: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 851.137:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 851.137: [GC[YG occupancy: 49180 K (118016 K)]851.137: [Rescan
> (parallel) , 0.0061320 secs]851.143: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 62030K(139444K), 0.0062320 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 851.144: [CMS-concurrent-sweep-start]
> 851.145: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 851.145: [CMS-concurrent-reset-start]
> 851.154: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 853.154: [GC [1 CMS-initial-mark: 12849K(21428K)] 62158K(139444K),
> 0.0071610 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 853.162: [CMS-concurrent-mark-start]
> 853.177: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 853.177: [CMS-concurrent-preclean-start]
> 853.178: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 853.178: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 858.181:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 858.181: [GC[YG occupancy: 49628 K (118016 K)]858.181: [Rescan
> (parallel) , 0.0029560 secs]858.184: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 62478K(139444K), 0.0030590 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 858.184: [CMS-concurrent-sweep-start]
> 858.186: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 858.186: [CMS-concurrent-reset-start]
> 858.195: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 860.196: [GC [1 CMS-initial-mark: 12849K(21428K)] 62606K(139444K),
> 0.0072070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 860.203: [CMS-concurrent-mark-start]
> 860.219: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 860.219: [CMS-concurrent-preclean-start]
> 860.219: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 860.219: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 865.226:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 865.227: [GC[YG occupancy: 50076 K (118016 K)]865.227: [Rescan
> (parallel) , 0.0066610 secs]865.233: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 62926K(139444K), 0.0067670 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 865.233: [CMS-concurrent-sweep-start]
> 865.235: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 865.235: [CMS-concurrent-reset-start]
> 865.244: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 867.244: [GC [1 CMS-initial-mark: 12849K(21428K)] 63054K(139444K),
> 0.0072490 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 867.252: [CMS-concurrent-mark-start]
> 867.267: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 867.267: [CMS-concurrent-preclean-start]
> 867.268: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 867.268: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 872.281:
> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 872.281: [GC[YG occupancy: 50525 K (118016 K)]872.281: [Rescan
> (parallel) , 0.0053780 secs]872.286: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 63375K(139444K), 0.0054790 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 872.287: [CMS-concurrent-sweep-start]
> 872.288: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 872.288: [CMS-concurrent-reset-start]
> 872.296: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 872.572: [GC [1 CMS-initial-mark: 12849K(21428K)] 63439K(139444K),
> 0.0073060 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 872.580: [CMS-concurrent-mark-start]
> 872.597: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 872.597: [CMS-concurrent-preclean-start]
> 872.597: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 872.597: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 877.600:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 877.601: [GC[YG occupancy: 51049 K (118016 K)]877.601: [Rescan
> (parallel) , 0.0063070 secs]877.607: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 63899K(139444K), 0.0064090 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 877.607: [CMS-concurrent-sweep-start]
> 877.609: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 877.609: [CMS-concurrent-reset-start]
> 877.619: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 879.619: [GC [1 CMS-initial-mark: 12849K(21428K)] 64027K(139444K),
> 0.0073320 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 879.626: [CMS-concurrent-mark-start]
> 879.643: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 879.643: [CMS-concurrent-preclean-start]
> 879.644: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 879.644: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 884.657:
> [CMS-concurrent-abortable-preclean: 0.708/5.014 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 884.658: [GC[YG occupancy: 51497 K (118016 K)]884.658: [Rescan
> (parallel) , 0.0056160 secs]884.663: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 64347K(139444K), 0.0057150 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 884.663: [CMS-concurrent-sweep-start]
> 884.665: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 884.665: [CMS-concurrent-reset-start]
> 884.674: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 886.674: [GC [1 CMS-initial-mark: 12849K(21428K)] 64475K(139444K),
> 0.0073420 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 886.682: [CMS-concurrent-mark-start]
> 886.698: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 886.698: [CMS-concurrent-preclean-start]
> 886.698: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 886.698: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 891.702:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 891.702: [GC[YG occupancy: 51945 K (118016 K)]891.702: [Rescan
> (parallel) , 0.0070120 secs]891.709: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 64795K(139444K), 0.0071150 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 891.709: [CMS-concurrent-sweep-start]
> 891.711: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 891.711: [CMS-concurrent-reset-start]
> 891.721: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 893.721: [GC [1 CMS-initial-mark: 12849K(21428K)] 64923K(139444K),
> 0.0073880 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 893.728: [CMS-concurrent-mark-start]
> 893.745: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 893.745: [CMS-concurrent-preclean-start]
> 893.745: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 893.745: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 898.852:
> [CMS-concurrent-abortable-preclean: 0.715/5.107 secs] [Times:
> user=0.71 sys=0.00, real=5.10 secs]
> 898.853: [GC[YG occupancy: 53466 K (118016 K)]898.853: [Rescan
> (parallel) , 0.0060600 secs]898.859: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 66315K(139444K), 0.0061640 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 898.859: [CMS-concurrent-sweep-start]
> 898.861: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 898.861: [CMS-concurrent-reset-start]
> 898.870: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 900.871: [GC [1 CMS-initial-mark: 12849K(21428K)] 66444K(139444K),
> 0.0074670 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 900.878: [CMS-concurrent-mark-start]
> 900.895: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 900.895: [CMS-concurrent-preclean-start]
> 900.896: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 900.896: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 905.969:
> [CMS-concurrent-abortable-preclean: 0.716/5.074 secs] [Times:
> user=0.72 sys=0.01, real=5.07 secs]
> 905.969: [GC[YG occupancy: 54157 K (118016 K)]905.970: [Rescan
> (parallel) , 0.0068200 secs]905.976: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 67007K(139444K), 0.0069250 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 905.977: [CMS-concurrent-sweep-start]
> 905.978: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 905.978: [CMS-concurrent-reset-start]
> 905.986: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 907.986: [GC [1 CMS-initial-mark: 12849K(21428K)] 67135K(139444K),
> 0.0076010 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 907.994: [CMS-concurrent-mark-start]
> 908.009: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 908.009: [CMS-concurrent-preclean-start]
> 908.010: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 908.010: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 913.013:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.01, real=5.00 secs]
> 913.013: [GC[YG occupancy: 54606 K (118016 K)]913.013: [Rescan
> (parallel) , 0.0053650 secs]913.018: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 67455K(139444K), 0.0054650 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 913.019: [CMS-concurrent-sweep-start]
> 913.021: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 913.021: [CMS-concurrent-reset-start]
> 913.030: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 915.030: [GC [1 CMS-initial-mark: 12849K(21428K)] 67583K(139444K),
> 0.0076410 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 915.038: [CMS-concurrent-mark-start]
> 915.055: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 915.055: [CMS-concurrent-preclean-start]
> 915.056: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 915.056: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 920.058:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 920.058: [GC[YG occupancy: 55054 K (118016 K)]920.058: [Rescan
> (parallel) , 0.0058380 secs]920.064: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 67904K(139444K), 0.0059420 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 920.064: [CMS-concurrent-sweep-start]
> 920.066: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 920.066: [CMS-concurrent-reset-start]
> 920.075: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.01, real=0.01 secs]
> 922.075: [GC [1 CMS-initial-mark: 12849K(21428K)] 68032K(139444K),
> 0.0075820 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 922.083: [CMS-concurrent-mark-start]
> 922.098: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 922.098: [CMS-concurrent-preclean-start]
> 922.099: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 922.099: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 927.102:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 927.102: [GC[YG occupancy: 55502 K (118016 K)]927.102: [Rescan
> (parallel) , 0.0059190 secs]927.108: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 68352K(139444K), 0.0060220 secs]
> [Times: user=0.06 sys=0.01, real=0.01 secs]
> 927.108: [CMS-concurrent-sweep-start]
> 927.110: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 927.110: [CMS-concurrent-reset-start]
> 927.120: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 929.120: [GC [1 CMS-initial-mark: 12849K(21428K)] 68480K(139444K),
> 0.0077620 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 929.128: [CMS-concurrent-mark-start]
> 929.145: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 929.145: [CMS-concurrent-preclean-start]
> 929.145: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 929.145: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 934.237:
> [CMS-concurrent-abortable-preclean: 0.717/5.092 secs] [Times:
> user=0.72 sys=0.00, real=5.09 secs]
> 934.238: [GC[YG occupancy: 55991 K (118016 K)]934.238: [Rescan
> (parallel) , 0.0042660 secs]934.242: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 68841K(139444K), 0.0043660 secs]
> [Times: user=0.05 sys=0.00, real=0.00 secs]
> 934.242: [CMS-concurrent-sweep-start]
> 934.244: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 934.244: [CMS-concurrent-reset-start]
> 934.252: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 936.253: [GC [1 CMS-initial-mark: 12849K(21428K)] 68969K(139444K),
> 0.0077340 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 936.261: [CMS-concurrent-mark-start]
> 936.277: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 936.277: [CMS-concurrent-preclean-start]
> 936.278: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 936.278: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 941.284:
> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 941.284: [GC[YG occupancy: 56439 K (118016 K)]941.284: [Rescan
> (parallel) , 0.0059460 secs]941.290: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 69289K(139444K), 0.0060470 secs]
> [Times: user=0.08 sys=0.00, real=0.00 secs]
> 941.290: [CMS-concurrent-sweep-start]
> 941.293: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 941.293: [CMS-concurrent-reset-start]
> 941.302: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 943.302: [GC [1 CMS-initial-mark: 12849K(21428K)] 69417K(139444K),
> 0.0077760 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 943.310: [CMS-concurrent-mark-start]
> 943.326: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 943.326: [CMS-concurrent-preclean-start]
> 943.327: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 943.327: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 948.340:
> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 948.340: [GC[YG occupancy: 56888 K (118016 K)]948.340: [Rescan
> (parallel) , 0.0047760 secs]948.345: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 69738K(139444K), 0.0048770 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 948.345: [CMS-concurrent-sweep-start]
> 948.347: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 948.347: [CMS-concurrent-reset-start]
> 948.356: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 950.356: [GC [1 CMS-initial-mark: 12849K(21428K)] 69866K(139444K),
> 0.0077750 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 950.364: [CMS-concurrent-mark-start]
> 950.380: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 950.380: [CMS-concurrent-preclean-start]
> 950.380: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 950.380: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 955.384:
> [CMS-concurrent-abortable-preclean: 0.698/5.004 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 955.384: [GC[YG occupancy: 57336 K (118016 K)]955.384: [Rescan
> (parallel) , 0.0072540 secs]955.392: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 70186K(139444K), 0.0073540 secs]
> [Times: user=0.08 sys=0.00, real=0.00 secs]
> 955.392: [CMS-concurrent-sweep-start]
> 955.394: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 955.394: [CMS-concurrent-reset-start]
> 955.403: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 957.403: [GC [1 CMS-initial-mark: 12849K(21428K)] 70314K(139444K),
> 0.0078120 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 957.411: [CMS-concurrent-mark-start]
> 957.427: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 957.427: [CMS-concurrent-preclean-start]
> 957.427: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 957.427: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 962.437:
> [CMS-concurrent-abortable-preclean: 0.704/5.010 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 962.437: [GC[YG occupancy: 57889 K (118016 K)]962.437: [Rescan
> (parallel) , 0.0076140 secs]962.445: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 70739K(139444K), 0.0077160 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 962.445: [CMS-concurrent-sweep-start]
> 962.446: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 962.446: [CMS-concurrent-reset-start]
> 962.456: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 962.599: [GC [1 CMS-initial-mark: 12849K(21428K)] 70827K(139444K),
> 0.0081180 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 962.608: [CMS-concurrent-mark-start]
> 962.626: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 962.626: [CMS-concurrent-preclean-start]
> 962.626: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 962.626: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 967.632:
> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 967.632: [GC[YG occupancy: 58338 K (118016 K)]967.632: [Rescan
> (parallel) , 0.0061170 secs]967.638: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 71188K(139444K), 0.0062190 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 967.638: [CMS-concurrent-sweep-start]
> 967.640: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 967.640: [CMS-concurrent-reset-start]
> 967.648: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 969.648: [GC [1 CMS-initial-mark: 12849K(21428K)] 71316K(139444K),
> 0.0081110 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 969.656: [CMS-concurrent-mark-start]
> 969.674: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 969.674: [CMS-concurrent-preclean-start]
> 969.674: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 969.674: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 974.677:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 974.677: [GC[YG occupancy: 58786 K (118016 K)]974.677: [Rescan
> (parallel) , 0.0070810 secs]974.685: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 71636K(139444K), 0.0072050 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 974.685: [CMS-concurrent-sweep-start]
> 974.686: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 974.686: [CMS-concurrent-reset-start]
> 974.695: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 976.696: [GC [1 CMS-initial-mark: 12849K(21428K)] 71764K(139444K),
> 0.0080650 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 976.704: [CMS-concurrent-mark-start]
> 976.719: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 976.719: [CMS-concurrent-preclean-start]
> 976.719: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 976.719: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 981.727:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.69 sys=0.01, real=5.01 secs]
> 981.727: [GC[YG occupancy: 59235 K (118016 K)]981.727: [Rescan
> (parallel) , 0.0066570 secs]981.734: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 72085K(139444K), 0.0067620 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 981.734: [CMS-concurrent-sweep-start]
> 981.736: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 981.736: [CMS-concurrent-reset-start]
> 981.745: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 983.745: [GC [1 CMS-initial-mark: 12849K(21428K)] 72213K(139444K),
> 0.0081400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 983.753: [CMS-concurrent-mark-start]
> 983.769: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 983.769: [CMS-concurrent-preclean-start]
> 983.769: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 983.769: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 988.840:
> [CMS-concurrent-abortable-preclean: 0.716/5.071 secs] [Times:
> user=0.71 sys=0.00, real=5.07 secs]
> 988.840: [GC[YG occupancy: 59683 K (118016 K)]988.840: [Rescan
> (parallel) , 0.0076020 secs]988.848: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 72533K(139444K), 0.0077100 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 988.848: [CMS-concurrent-sweep-start]
> 988.850: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 988.850: [CMS-concurrent-reset-start]
> 988.858: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 990.858: [GC [1 CMS-initial-mark: 12849K(21428K)] 72661K(139444K),
> 0.0081810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 990.867: [CMS-concurrent-mark-start]
> 990.884: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 990.884: [CMS-concurrent-preclean-start]
> 990.885: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 990.885: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 995.999:
> [CMS-concurrent-abortable-preclean: 0.721/5.114 secs] [Times:
> user=0.73 sys=0.00, real=5.11 secs]
> 995.999: [GC[YG occupancy: 60307 K (118016 K)]995.999: [Rescan
> (parallel) , 0.0058190 secs]996.005: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 73156K(139444K), 0.0059260 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 996.005: [CMS-concurrent-sweep-start]
> 996.007: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 996.007: [CMS-concurrent-reset-start]
> 996.016: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 998.016: [GC [1 CMS-initial-mark: 12849K(21428K)] 73285K(139444K),
> 0.0052760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 998.022: [CMS-concurrent-mark-start]
> 998.038: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 998.038: [CMS-concurrent-preclean-start]
> 998.039: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 998.039: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1003.048:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1003.048: [GC[YG occupancy: 60755 K (118016 K)]1003.048: [Rescan
> (parallel) , 0.0068040 secs]1003.055: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 73605K(139444K), 0.0069060 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 1003.055: [CMS-concurrent-sweep-start]
> 1003.057: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1003.057: [CMS-concurrent-reset-start]
> 1003.066: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1005.066: [GC [1 CMS-initial-mark: 12849K(21428K)] 73733K(139444K),
> 0.0082200 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1005.075: [CMS-concurrent-mark-start]
> 1005.090: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1005.090: [CMS-concurrent-preclean-start]
> 1005.090: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1005.090: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1010.094:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1010.094: [GC[YG occupancy: 61203 K (118016 K)]1010.094: [Rescan
> (parallel) , 0.0066010 secs]1010.101: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 74053K(139444K), 0.0067120 secs]
> [Times: user=0.08 sys=0.00, real=0.00 secs]
> 1010.101: [CMS-concurrent-sweep-start]
> 1010.103: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1010.103: [CMS-concurrent-reset-start]
> 1010.112: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1012.113: [GC [1 CMS-initial-mark: 12849K(21428K)] 74181K(139444K),
> 0.0083460 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1012.121: [CMS-concurrent-mark-start]
> 1012.137: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1012.137: [CMS-concurrent-preclean-start]
> 1012.138: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1012.138: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1017.144:
> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1017.144: [GC[YG occupancy: 61651 K (118016 K)]1017.144: [Rescan
> (parallel) , 0.0058810 secs]1017.150: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 74501K(139444K), 0.0059830 secs]
> [Times: user=0.06 sys=0.00, real=0.00 secs]
> 1017.151: [CMS-concurrent-sweep-start]
> 1017.153: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1017.153: [CMS-concurrent-reset-start]
> 1017.162: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1019.162: [GC [1 CMS-initial-mark: 12849K(21428K)] 74629K(139444K),
> 0.0083310 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1019.171: [CMS-concurrent-mark-start]
> 1019.187: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1019.187: [CMS-concurrent-preclean-start]
> 1019.187: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1019.187: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1024.261:
> [CMS-concurrent-abortable-preclean: 0.717/5.074 secs] [Times:
> user=0.72 sys=0.00, real=5.07 secs]
> 1024.261: [GC[YG occupancy: 62351 K (118016 K)]1024.262: [Rescan
> (parallel) , 0.0069720 secs]1024.269: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 75200K(139444K), 0.0070750 secs]
> [Times: user=0.08 sys=0.01, real=0.01 secs]
> 1024.269: [CMS-concurrent-sweep-start]
> 1024.270: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1024.270: [CMS-concurrent-reset-start]
> 1024.278: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1026.279: [GC [1 CMS-initial-mark: 12849K(21428K)] 75329K(139444K),
> 0.0086360 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1026.288: [CMS-concurrent-mark-start]
> 1026.305: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1026.305: [CMS-concurrent-preclean-start]
> 1026.305: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1026.305: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1031.308:
> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1031.308: [GC[YG occupancy: 62799 K (118016 K)]1031.308: [Rescan
> (parallel) , 0.0069330 secs]1031.315: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 75649K(139444K), 0.0070380 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1031.315: [CMS-concurrent-sweep-start]
> 1031.316: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1031.316: [CMS-concurrent-reset-start]
> 1031.326: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1033.326: [GC [1 CMS-initial-mark: 12849K(21428K)] 75777K(139444K),
> 0.0085850 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1033.335: [CMS-concurrent-mark-start]
> 1033.350: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1033.350: [CMS-concurrent-preclean-start]
> 1033.351: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1033.351: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1038.357:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.69 sys=0.01, real=5.01 secs]
> 1038.358: [GC[YG occupancy: 63247 K (118016 K)]1038.358: [Rescan
> (parallel) , 0.0071860 secs]1038.365: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 76097K(139444K), 0.0072900 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 1038.365: [CMS-concurrent-sweep-start]
> 1038.367: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1038.367: [CMS-concurrent-reset-start]
> 1038.376: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1040.376: [GC [1 CMS-initial-mark: 12849K(21428K)] 76225K(139444K),
> 0.0085910 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1040.385: [CMS-concurrent-mark-start]
> 1040.401: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1040.401: [CMS-concurrent-preclean-start]
> 1040.401: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1040.401: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1045.411:
> [CMS-concurrent-abortable-preclean: 0.705/5.010 secs] [Times:
> user=0.69 sys=0.01, real=5.01 secs]
> 1045.412: [GC[YG occupancy: 63695 K (118016 K)]1045.412: [Rescan
> (parallel) , 0.0082050 secs]1045.420: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 76545K(139444K), 0.0083110 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1045.420: [CMS-concurrent-sweep-start]
> 1045.421: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1045.421: [CMS-concurrent-reset-start]
> 1045.430: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1047.430: [GC [1 CMS-initial-mark: 12849K(21428K)] 76673K(139444K),
> 0.0086110 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1047.439: [CMS-concurrent-mark-start]
> 1047.456: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1047.456: [CMS-concurrent-preclean-start]
> 1047.456: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1047.456: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1052.462:
> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1052.462: [GC[YG occupancy: 64144 K (118016 K)]1052.462: [Rescan
> (parallel) , 0.0087770 secs]1052.471: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 76994K(139444K), 0.0088770 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1052.471: [CMS-concurrent-sweep-start]
> 1052.472: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1052.472: [CMS-concurrent-reset-start]
> 1052.481: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1052.628: [GC [1 CMS-initial-mark: 12849K(21428K)] 77058K(139444K),
> 0.0086170 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1052.637: [CMS-concurrent-mark-start]
> 1052.655: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1052.655: [CMS-concurrent-preclean-start]
> 1052.656: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1052.656: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1057.658:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1057.658: [GC[YG occupancy: 64569 K (118016 K)]1057.658: [Rescan
> (parallel) , 0.0072850 secs]1057.665: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 77418K(139444K), 0.0073880 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1057.666: [CMS-concurrent-sweep-start]
> 1057.668: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1057.668: [CMS-concurrent-reset-start]
> 1057.677: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1059.677: [GC [1 CMS-initial-mark: 12849K(21428K)] 77547K(139444K),
> 0.0086820 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1059.686: [CMS-concurrent-mark-start]
> 1059.703: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1059.703: [CMS-concurrent-preclean-start]
> 1059.703: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1059.703: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1064.712:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1064.712: [GC[YG occupancy: 65017 K (118016 K)]1064.712: [Rescan
> (parallel) , 0.0071630 secs]1064.720: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 77867K(139444K), 0.0072700 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1064.720: [CMS-concurrent-sweep-start]
> 1064.722: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1064.722: [CMS-concurrent-reset-start]
> 1064.731: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1066.731: [GC [1 CMS-initial-mark: 12849K(21428K)] 77995K(139444K),
> 0.0087640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1066.740: [CMS-concurrent-mark-start]
> 1066.757: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1066.757: [CMS-concurrent-preclean-start]
> 1066.757: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1066.757: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1071.821:
> [CMS-concurrent-abortable-preclean: 0.714/5.064 secs] [Times:
> user=0.71 sys=0.00, real=5.06 secs]
> 1071.822: [GC[YG occupancy: 65465 K (118016 K)]1071.822: [Rescan
> (parallel) , 0.0056280 secs]1071.827: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 78315K(139444K), 0.0057430 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 1071.828: [CMS-concurrent-sweep-start]
> 1071.830: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1071.830: [CMS-concurrent-reset-start]
> 1071.839: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1073.839: [GC [1 CMS-initial-mark: 12849K(21428K)] 78443K(139444K),
> 0.0087570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1073.848: [CMS-concurrent-mark-start]
> 1073.865: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1073.865: [CMS-concurrent-preclean-start]
> 1073.865: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1073.865: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1078.868:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1078.868: [GC[YG occupancy: 65914 K (118016 K)]1078.868: [Rescan
> (parallel) , 0.0055280 secs]1078.873: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 78763K(139444K), 0.0056320 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 1078.874: [CMS-concurrent-sweep-start]
> 1078.875: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1078.875: [CMS-concurrent-reset-start]
> 1078.884: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1080.884: [GC [1 CMS-initial-mark: 12849K(21428K)] 78892K(139444K),
> 0.0088520 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1080.893: [CMS-concurrent-mark-start]
> 1080.908: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1080.909: [CMS-concurrent-preclean-start]
> 1080.909: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1080.909: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1086.021:
> [CMS-concurrent-abortable-preclean: 0.714/5.112 secs] [Times:
> user=0.72 sys=0.00, real=5.11 secs]
> 1086.021: [GC[YG occupancy: 66531 K (118016 K)]1086.022: [Rescan
> (parallel) , 0.0075330 secs]1086.029: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 79381K(139444K), 0.0076440 secs]
> [Times: user=0.09 sys=0.01, real=0.01 secs]
> 1086.029: [CMS-concurrent-sweep-start]
> 1086.031: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1086.031: [CMS-concurrent-reset-start]
> 1086.041: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1088.041: [GC [1 CMS-initial-mark: 12849K(21428K)] 79509K(139444K),
> 0.0091350 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1088.050: [CMS-concurrent-mark-start]
> 1088.066: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1088.067: [CMS-concurrent-preclean-start]
> 1088.067: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1088.067: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1093.070:
> [CMS-concurrent-abortable-preclean: 0.698/5.004 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1093.071: [GC[YG occupancy: 66980 K (118016 K)]1093.071: [Rescan
> (parallel) , 0.0051870 secs]1093.076: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 79830K(139444K), 0.0052930 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 1093.076: [CMS-concurrent-sweep-start]
> 1093.078: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1093.078: [CMS-concurrent-reset-start]
> 1093.087: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1095.088: [GC [1 CMS-initial-mark: 12849K(21428K)] 79958K(139444K),
> 0.0091350 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1095.097: [CMS-concurrent-mark-start]
> 1095.114: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1095.114: [CMS-concurrent-preclean-start]
> 1095.115: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1095.115: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1100.121:
> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
> user=0.69 sys=0.01, real=5.00 secs]
> 1100.121: [GC[YG occupancy: 67428 K (118016 K)]1100.122: [Rescan
> (parallel) , 0.0068510 secs]1100.128: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 80278K(139444K), 0.0069510 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1100.129: [CMS-concurrent-sweep-start]
> 1100.130: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1100.130: [CMS-concurrent-reset-start]
> 1100.138: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1102.139: [GC [1 CMS-initial-mark: 12849K(21428K)] 80406K(139444K),
> 0.0090760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1102.148: [CMS-concurrent-mark-start]
> 1102.165: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1102.165: [CMS-concurrent-preclean-start]
> 1102.165: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1102.165: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1107.168:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1107.168: [GC[YG occupancy: 67876 K (118016 K)]1107.168: [Rescan
> (parallel) , 0.0076420 secs]1107.176: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 80726K(139444K), 0.0077500 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1107.176: [CMS-concurrent-sweep-start]
> 1107.178: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1107.178: [CMS-concurrent-reset-start]
> 1107.187: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1109.188: [GC [1 CMS-initial-mark: 12849K(21428K)] 80854K(139444K),
> 0.0091510 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1109.197: [CMS-concurrent-mark-start]
> 1109.214: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1109.214: [CMS-concurrent-preclean-start]
> 1109.214: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1109.214: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1114.290:
> [CMS-concurrent-abortable-preclean: 0.711/5.076 secs] [Times:
> user=0.72 sys=0.00, real=5.07 secs]
> 1114.290: [GC[YG occupancy: 68473 K (118016 K)]1114.290: [Rescan
> (parallel) , 0.0084730 secs]1114.299: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 81322K(139444K), 0.0085810 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1114.299: [CMS-concurrent-sweep-start]
> 1114.301: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1114.301: [CMS-concurrent-reset-start]
> 1114.310: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1115.803: [GC [1 CMS-initial-mark: 12849K(21428K)] 81451K(139444K),
> 0.0106050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1115.814: [CMS-concurrent-mark-start]
> 1115.830: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1115.830: [CMS-concurrent-preclean-start]
> 1115.831: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1115.831: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1120.839:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1120.839: [GC[YG occupancy: 68921 K (118016 K)]1120.839: [Rescan
> (parallel) , 0.0088800 secs]1120.848: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 81771K(139444K), 0.0089910 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1120.848: [CMS-concurrent-sweep-start]
> 1120.850: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1120.850: [CMS-concurrent-reset-start]
> 1120.858: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1122.859: [GC [1 CMS-initial-mark: 12849K(21428K)] 81899K(139444K),
> 0.0092280 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1122.868: [CMS-concurrent-mark-start]
> 1122.885: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1122.885: [CMS-concurrent-preclean-start]
> 1122.885: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1122.885: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1127.888:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.69 sys=0.01, real=5.00 secs]
> 1127.888: [GC[YG occupancy: 69369 K (118016 K)]1127.888: [Rescan
> (parallel) , 0.0087740 secs]1127.897: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 82219K(139444K), 0.0088850 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1127.897: [CMS-concurrent-sweep-start]
> 1127.898: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1127.898: [CMS-concurrent-reset-start]
> 1127.906: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1129.907: [GC [1 CMS-initial-mark: 12849K(21428K)] 82347K(139444K),
> 0.0092280 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1129.916: [CMS-concurrent-mark-start]
> 1129.933: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1129.933: [CMS-concurrent-preclean-start]
> 1129.934: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1129.934: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1134.938:
> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1134.938: [GC[YG occupancy: 69818 K (118016 K)]1134.939: [Rescan
> (parallel) , 0.0078530 secs]1134.946: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 82667K(139444K), 0.0079630 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1134.947: [CMS-concurrent-sweep-start]
> 1134.948: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1134.948: [CMS-concurrent-reset-start]
> 1134.956: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1136.957: [GC [1 CMS-initial-mark: 12849K(21428K)] 82795K(139444K),
> 0.0092760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1136.966: [CMS-concurrent-mark-start]
> 1136.983: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1136.983: [CMS-concurrent-preclean-start]
> 1136.984: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.01 secs]
> 1136.984: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1141.991:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1141.991: [GC[YG occupancy: 70266 K (118016 K)]1141.991: [Rescan
> (parallel) , 0.0090620 secs]1142.000: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 83116K(139444K), 0.0091700 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1142.000: [CMS-concurrent-sweep-start]
> 1142.002: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1142.002: [CMS-concurrent-reset-start]
> 1142.011: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1142.657: [GC [1 CMS-initial-mark: 12849K(21428K)] 83390K(139444K),
> 0.0094330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1142.667: [CMS-concurrent-mark-start]
> 1142.685: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1142.685: [CMS-concurrent-preclean-start]
> 1142.686: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1142.686: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1147.688:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1147.688: [GC[YG occupancy: 70901 K (118016 K)]1147.688: [Rescan
> (parallel) , 0.0081170 secs]1147.696: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 83751K(139444K), 0.0082390 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1147.697: [CMS-concurrent-sweep-start]
> 1147.698: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1147.698: [CMS-concurrent-reset-start]
> 1147.706: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1149.706: [GC [1 CMS-initial-mark: 12849K(21428K)] 83879K(139444K),
> 0.0095560 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1149.716: [CMS-concurrent-mark-start]
> 1149.734: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1149.734: [CMS-concurrent-preclean-start]
> 1149.734: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1149.734: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1154.741:
> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1154.741: [GC[YG occupancy: 71349 K (118016 K)]1154.741: [Rescan
> (parallel) , 0.0090720 secs]1154.750: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 84199K(139444K), 0.0091780 secs]
> [Times: user=0.10 sys=0.01, real=0.01 secs]
> 1154.750: [CMS-concurrent-sweep-start]
> 1154.752: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1154.752: [CMS-concurrent-reset-start]
> 1154.762: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1155.021: [GC [1 CMS-initial-mark: 12849K(21428K)] 84199K(139444K),
> 0.0094030 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1155.031: [CMS-concurrent-mark-start]
> 1155.047: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1155.047: [CMS-concurrent-preclean-start]
> 1155.047: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1155.047: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1160.056:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1160.056: [GC[YG occupancy: 71669 K (118016 K)]1160.056: [Rescan
> (parallel) , 0.0056520 secs]1160.062: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 84519K(139444K), 0.0057790 secs]
> [Times: user=0.07 sys=0.00, real=0.00 secs]
> 1160.062: [CMS-concurrent-sweep-start]
> 1160.064: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1160.064: [CMS-concurrent-reset-start]
> 1160.073: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1162.074: [GC [1 CMS-initial-mark: 12849K(21428K)] 84647K(139444K),
> 0.0095040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1162.083: [CMS-concurrent-mark-start]
> 1162.098: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1162.098: [CMS-concurrent-preclean-start]
> 1162.099: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1162.099: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1167.102:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1167.102: [GC[YG occupancy: 72118 K (118016 K)]1167.102: [Rescan
> (parallel) , 0.0072180 secs]1167.110: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 84968K(139444K), 0.0073300 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 1167.110: [CMS-concurrent-sweep-start]
> 1167.112: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1167.112: [CMS-concurrent-reset-start]
> 1167.121: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1169.121: [GC [1 CMS-initial-mark: 12849K(21428K)] 85096K(139444K),
> 0.0096940 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1169.131: [CMS-concurrent-mark-start]
> 1169.147: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1169.147: [CMS-concurrent-preclean-start]
> 1169.147: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1169.147: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1174.197:
> [CMS-concurrent-abortable-preclean: 0.720/5.050 secs] [Times:
> user=0.72 sys=0.01, real=5.05 secs]
> 1174.198: [GC[YG occupancy: 72607 K (118016 K)]1174.198: [Rescan
> (parallel) , 0.0064910 secs]1174.204: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 85456K(139444K), 0.0065940 secs]
> [Times: user=0.06 sys=0.01, real=0.01 secs]
> 1174.204: [CMS-concurrent-sweep-start]
> 1174.206: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1174.206: [CMS-concurrent-reset-start]
> 1174.215: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1176.215: [GC [1 CMS-initial-mark: 12849K(21428K)] 85585K(139444K),
> 0.0095940 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1176.225: [CMS-concurrent-mark-start]
> 1176.240: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1176.240: [CMS-concurrent-preclean-start]
> 1176.241: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1176.241: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1181.244:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1181.244: [GC[YG occupancy: 73055 K (118016 K)]1181.244: [Rescan
> (parallel) , 0.0093030 secs]1181.254: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 85905K(139444K), 0.0094040 secs]
> [Times: user=0.09 sys=0.01, real=0.01 secs]
> 1181.254: [CMS-concurrent-sweep-start]
> 1181.256: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1181.256: [CMS-concurrent-reset-start]
> 1181.265: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1183.266: [GC [1 CMS-initial-mark: 12849K(21428K)] 86033K(139444K),
> 0.0096490 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1183.275: [CMS-concurrent-mark-start]
> 1183.293: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
> sys=0.00, real=0.02 secs]
> 1183.293: [CMS-concurrent-preclean-start]
> 1183.294: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1183.294: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1188.301:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1188.301: [GC[YG occupancy: 73503 K (118016 K)]1188.301: [Rescan
> (parallel) , 0.0092610 secs]1188.310: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 86353K(139444K), 0.0093750 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1188.310: [CMS-concurrent-sweep-start]
> 1188.312: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1188.312: [CMS-concurrent-reset-start]
> 1188.320: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1190.321: [GC [1 CMS-initial-mark: 12849K(21428K)] 86481K(139444K),
> 0.0097510 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1190.331: [CMS-concurrent-mark-start]
> 1190.347: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1190.347: [CMS-concurrent-preclean-start]
> 1190.347: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1190.347: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1195.359:
> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1195.359: [GC[YG occupancy: 73952 K (118016 K)]1195.359: [Rescan
> (parallel) , 0.0093210 secs]1195.368: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 86801K(139444K), 0.0094330 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1195.369: [CMS-concurrent-sweep-start]
> 1195.370: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1195.370: [CMS-concurrent-reset-start]
> 1195.378: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1196.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 88001K(139444K),
> 0.0099870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1196.553: [CMS-concurrent-mark-start]
> 1196.570: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1196.570: [CMS-concurrent-preclean-start]
> 1196.570: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1196.570: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1201.574:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1201.574: [GC[YG occupancy: 75472 K (118016 K)]1201.574: [Rescan
> (parallel) , 0.0096480 secs]1201.584: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 88322K(139444K), 0.0097500 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1201.584: [CMS-concurrent-sweep-start]
> 1201.586: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1201.586: [CMS-concurrent-reset-start]
> 1201.595: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1202.679: [GC [1 CMS-initial-mark: 12849K(21428K)] 88491K(139444K),
> 0.0099400 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1202.690: [CMS-concurrent-mark-start]
> 1202.708: [CMS-concurrent-mark: 0.016/0.019 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1202.708: [CMS-concurrent-preclean-start]
> 1202.709: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1202.709: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1207.718:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1207.718: [GC[YG occupancy: 76109 K (118016 K)]1207.718: [Rescan
> (parallel) , 0.0096360 secs]1207.727: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 88959K(139444K), 0.0097380 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1207.728: [CMS-concurrent-sweep-start]
> 1207.729: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1207.729: [CMS-concurrent-reset-start]
> 1207.737: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1209.738: [GC [1 CMS-initial-mark: 12849K(21428K)] 89087K(139444K),
> 0.0099440 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1209.748: [CMS-concurrent-mark-start]
> 1209.765: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1209.765: [CMS-concurrent-preclean-start]
> 1209.765: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1209.765: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1214.797:
> [CMS-concurrent-abortable-preclean: 0.716/5.031 secs] [Times:
> user=0.72 sys=0.00, real=5.03 secs]
> 1214.797: [GC[YG occupancy: 76557 K (118016 K)]1214.797: [Rescan
> (parallel) , 0.0096280 secs]1214.807: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 89407K(139444K), 0.0097320 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1214.807: [CMS-concurrent-sweep-start]
> 1214.808: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1214.808: [CMS-concurrent-reset-start]
> 1214.816: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1216.817: [GC [1 CMS-initial-mark: 12849K(21428K)] 89535K(139444K),
> 0.0099640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1216.827: [CMS-concurrent-mark-start]
> 1216.844: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1216.844: [CMS-concurrent-preclean-start]
> 1216.844: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1216.844: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1221.847:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1221.847: [GC[YG occupancy: 77005 K (118016 K)]1221.847: [Rescan
> (parallel) , 0.0061810 secs]1221.854: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 89855K(139444K), 0.0062950 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 1221.854: [CMS-concurrent-sweep-start]
> 1221.855: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1221.855: [CMS-concurrent-reset-start]
> 1221.864: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1223.865: [GC [1 CMS-initial-mark: 12849K(21428K)] 89983K(139444K),
> 0.0100430 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1223.875: [CMS-concurrent-mark-start]
> 1223.890: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1223.890: [CMS-concurrent-preclean-start]
> 1223.891: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1223.891: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1228.899:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1228.899: [GC[YG occupancy: 77454 K (118016 K)]1228.899: [Rescan
> (parallel) , 0.0095850 secs]1228.909: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 90304K(139444K), 0.0096960 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1228.909: [CMS-concurrent-sweep-start]
> 1228.911: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1228.911: [CMS-concurrent-reset-start]
> 1228.919: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1230.919: [GC [1 CMS-initial-mark: 12849K(21428K)] 90432K(139444K),
> 0.0101360 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1230.930: [CMS-concurrent-mark-start]
> 1230.946: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1230.946: [CMS-concurrent-preclean-start]
> 1230.947: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1230.947: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1235.952:
> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1235.952: [GC[YG occupancy: 77943 K (118016 K)]1235.952: [Rescan
> (parallel) , 0.0084420 secs]1235.961: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 90793K(139444K), 0.0085450 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1235.961: [CMS-concurrent-sweep-start]
> 1235.963: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1235.963: [CMS-concurrent-reset-start]
> 1235.972: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1237.973: [GC [1 CMS-initial-mark: 12849K(21428K)] 90921K(139444K),
> 0.0101280 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1237.983: [CMS-concurrent-mark-start]
> 1237.998: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1237.998: [CMS-concurrent-preclean-start]
> 1237.999: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1237.999: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1243.008:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1243.008: [GC[YG occupancy: 78391 K (118016 K)]1243.008: [Rescan
> (parallel) , 0.0090510 secs]1243.017: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 91241K(139444K), 0.0091560 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1243.017: [CMS-concurrent-sweep-start]
> 1243.019: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1243.019: [CMS-concurrent-reset-start]
> 1243.027: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1245.027: [GC [1 CMS-initial-mark: 12849K(21428K)] 91369K(139444K),
> 0.0101080 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1245.038: [CMS-concurrent-mark-start]
> 1245.055: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1245.055: [CMS-concurrent-preclean-start]
> 1245.055: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1245.055: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1250.058:
> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1250.058: [GC[YG occupancy: 78839 K (118016 K)]1250.058: [Rescan
> (parallel) , 0.0096920 secs]1250.068: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 91689K(139444K), 0.0098040 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1250.068: [CMS-concurrent-sweep-start]
> 1250.070: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1250.070: [CMS-concurrent-reset-start]
> 1250.078: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1252.078: [GC [1 CMS-initial-mark: 12849K(21428K)] 91817K(139444K),
> 0.0102560 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1252.089: [CMS-concurrent-mark-start]
> 1252.105: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1252.105: [CMS-concurrent-preclean-start]
> 1252.106: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1252.106: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1257.113:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1257.113: [GC[YG occupancy: 79288 K (118016 K)]1257.113: [Rescan
> (parallel) , 0.0089920 secs]1257.122: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 92137K(139444K), 0.0090960 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1257.122: [CMS-concurrent-sweep-start]
> 1257.124: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1257.124: [CMS-concurrent-reset-start]
> 1257.133: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1259.134: [GC [1 CMS-initial-mark: 12849K(21428K)] 92266K(139444K),
> 0.0101720 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1259.144: [CMS-concurrent-mark-start]
> 1259.159: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 1259.159: [CMS-concurrent-preclean-start]
> 1259.159: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1259.159: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1264.229:
> [CMS-concurrent-abortable-preclean: 0.716/5.070 secs] [Times:
> user=0.72 sys=0.01, real=5.07 secs]
> 1264.229: [GC[YG occupancy: 79881 K (118016 K)]1264.229: [Rescan
> (parallel) , 0.0101320 secs]1264.240: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 92731K(139444K), 0.0102440 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1264.240: [CMS-concurrent-sweep-start]
> 1264.241: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1264.241: [CMS-concurrent-reset-start]
> 1264.250: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1266.250: [GC [1 CMS-initial-mark: 12849K(21428K)] 92859K(139444K),
> 0.0105180 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1266.261: [CMS-concurrent-mark-start]
> 1266.277: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1266.277: [CMS-concurrent-preclean-start]
> 1266.277: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1266.277: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1271.285:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1271.285: [GC[YG occupancy: 80330 K (118016 K)]1271.285: [Rescan
> (parallel) , 0.0094600 secs]1271.295: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 93180K(139444K), 0.0095600 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1271.295: [CMS-concurrent-sweep-start]
> 1271.297: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1271.297: [CMS-concurrent-reset-start]
> 1271.306: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1273.306: [GC [1 CMS-initial-mark: 12849K(21428K)] 93308K(139444K),
> 0.0104100 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1273.317: [CMS-concurrent-mark-start]
> 1273.334: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1273.334: [CMS-concurrent-preclean-start]
> 1273.335: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1273.335: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1278.341:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1278.341: [GC[YG occupancy: 80778 K (118016 K)]1278.341: [Rescan
> (parallel) , 0.0101320 secs]1278.351: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 93628K(139444K), 0.0102460 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1278.351: [CMS-concurrent-sweep-start]
> 1278.353: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1278.353: [CMS-concurrent-reset-start]
> 1278.362: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1280.362: [GC [1 CMS-initial-mark: 12849K(21428K)] 93756K(139444K),
> 0.0105680 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1280.373: [CMS-concurrent-mark-start]
> 1280.388: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1280.388: [CMS-concurrent-preclean-start]
> 1280.388: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1280.388: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1285.400:
> [CMS-concurrent-abortable-preclean: 0.706/5.012 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1285.400: [GC[YG occupancy: 81262 K (118016 K)]1285.400: [Rescan
> (parallel) , 0.0093660 secs]1285.410: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 94111K(139444K), 0.0094820 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1285.410: [CMS-concurrent-sweep-start]
> 1285.411: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1285.411: [CMS-concurrent-reset-start]
> 1285.420: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1287.420: [GC [1 CMS-initial-mark: 12849K(21428K)] 94240K(139444K),
> 0.0105800 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1287.431: [CMS-concurrent-mark-start]
> 1287.447: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1287.447: [CMS-concurrent-preclean-start]
> 1287.447: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1287.447: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1292.460:
> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1292.460: [GC[YG occupancy: 81710 K (118016 K)]1292.460: [Rescan
> (parallel) , 0.0081130 secs]1292.468: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 94560K(139444K), 0.0082210 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1292.468: [CMS-concurrent-sweep-start]
> 1292.470: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1292.470: [CMS-concurrent-reset-start]
> 1292.480: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1292.712: [GC [1 CMS-initial-mark: 12849K(21428K)] 94624K(139444K),
> 0.0104870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1292.723: [CMS-concurrent-mark-start]
> 1292.739: [CMS-concurrent-mark: 0.015/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1292.739: [CMS-concurrent-preclean-start]
> 1292.740: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1292.740: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1297.748:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1297.748: [GC[YG occupancy: 82135 K (118016 K)]1297.748: [Rescan
> (parallel) , 0.0106180 secs]1297.759: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 94985K(139444K), 0.0107410 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1297.759: [CMS-concurrent-sweep-start]
> 1297.760: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1297.761: [CMS-concurrent-reset-start]
> 1297.769: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1299.769: [GC [1 CMS-initial-mark: 12849K(21428K)] 95113K(139444K),
> 0.0105340 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1299.780: [CMS-concurrent-mark-start]
> 1299.796: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1299.796: [CMS-concurrent-preclean-start]
> 1299.797: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1299.797: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1304.805:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.69 sys=0.00, real=5.01 secs]
> 1304.805: [GC[YG occupancy: 82583 K (118016 K)]1304.806: [Rescan
> (parallel) , 0.0094010 secs]1304.815: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 95433K(139444K), 0.0095140 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1304.815: [CMS-concurrent-sweep-start]
> 1304.817: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1304.817: [CMS-concurrent-reset-start]
> 1304.827: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1306.827: [GC [1 CMS-initial-mark: 12849K(21428K)] 95561K(139444K),
> 0.0107300 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1306.838: [CMS-concurrent-mark-start]
> 1306.855: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1306.855: [CMS-concurrent-preclean-start]
> 1306.855: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1306.855: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1311.858:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1311.858: [GC[YG occupancy: 83032 K (118016 K)]1311.858: [Rescan
> (parallel) , 0.0094210 secs]1311.867: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 95882K(139444K), 0.0095360 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1311.868: [CMS-concurrent-sweep-start]
> 1311.869: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1311.869: [CMS-concurrent-reset-start]
> 1311.877: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1313.878: [GC [1 CMS-initial-mark: 12849K(21428K)] 96010K(139444K),
> 0.0107870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1313.889: [CMS-concurrent-mark-start]
> 1313.905: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1313.905: [CMS-concurrent-preclean-start]
> 1313.906: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1313.906: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1318.914:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1318.915: [GC[YG occupancy: 83481 K (118016 K)]1318.915: [Rescan
> (parallel) , 0.0096280 secs]1318.924: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 96331K(139444K), 0.0097340 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1318.925: [CMS-concurrent-sweep-start]
> 1318.927: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1318.927: [CMS-concurrent-reset-start]
> 1318.936: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1320.936: [GC [1 CMS-initial-mark: 12849K(21428K)] 96459K(139444K),
> 0.0106300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1320.947: [CMS-concurrent-mark-start]
> 1320.964: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1320.964: [CMS-concurrent-preclean-start]
> 1320.965: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1320.965: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1325.991:
> [CMS-concurrent-abortable-preclean: 0.717/5.026 secs] [Times:
> user=0.73 sys=0.00, real=5.02 secs]
> 1325.991: [GC[YG occupancy: 84205 K (118016 K)]1325.991: [Rescan
> (parallel) , 0.0097880 secs]1326.001: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 97055K(139444K), 0.0099010 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1326.001: [CMS-concurrent-sweep-start]
> 1326.003: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1326.003: [CMS-concurrent-reset-start]
> 1326.012: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1328.013: [GC [1 CMS-initial-mark: 12849K(21428K)] 97183K(139444K),
> 0.0109730 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1328.024: [CMS-concurrent-mark-start]
> 1328.039: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1328.039: [CMS-concurrent-preclean-start]
> 1328.039: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1328.039: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1333.043:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1333.043: [GC[YG occupancy: 84654 K (118016 K)]1333.043: [Rescan
> (parallel) , 0.0110740 secs]1333.054: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 97504K(139444K), 0.0111760 secs]
> [Times: user=0.12 sys=0.01, real=0.02 secs]
> 1333.054: [CMS-concurrent-sweep-start]
> 1333.056: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1333.056: [CMS-concurrent-reset-start]
> 1333.065: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1335.066: [GC [1 CMS-initial-mark: 12849K(21428K)] 97632K(139444K),
> 0.0109300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1335.077: [CMS-concurrent-mark-start]
> 1335.094: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1335.094: [CMS-concurrent-preclean-start]
> 1335.094: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1335.094: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1340.103:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1340.103: [GC[YG occupancy: 85203 K (118016 K)]1340.103: [Rescan
> (parallel) , 0.0109470 secs]1340.114: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 98052K(139444K), 0.0110500 secs]
> [Times: user=0.11 sys=0.01, real=0.02 secs]
> 1340.114: [CMS-concurrent-sweep-start]
> 1340.116: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1340.116: [CMS-concurrent-reset-start]
> 1340.125: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1342.126: [GC [1 CMS-initial-mark: 12849K(21428K)] 98181K(139444K),
> 0.0109170 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1342.137: [CMS-concurrent-mark-start]
> 1342.154: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1342.154: [CMS-concurrent-preclean-start]
> 1342.154: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1342.154: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1347.161:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1347.162: [GC[YG occupancy: 85652 K (118016 K)]1347.162: [Rescan
> (parallel) , 0.0075610 secs]1347.169: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 98502K(139444K), 0.0076680 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1347.169: [CMS-concurrent-sweep-start]
> 1347.171: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1347.172: [CMS-concurrent-reset-start]
> 1347.181: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1349.181: [GC [1 CMS-initial-mark: 12849K(21428K)] 98630K(139444K),
> 0.0109540 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1349.192: [CMS-concurrent-mark-start]
> 1349.208: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1349.208: [CMS-concurrent-preclean-start]
> 1349.208: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1349.208: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1354.268:
> [CMS-concurrent-abortable-preclean: 0.723/5.060 secs] [Times:
> user=0.73 sys=0.00, real=5.06 secs]
> 1354.268: [GC[YG occupancy: 86241 K (118016 K)]1354.268: [Rescan
> (parallel) , 0.0099530 secs]1354.278: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 99091K(139444K), 0.0100670 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1354.278: [CMS-concurrent-sweep-start]
> 1354.280: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1354.280: [CMS-concurrent-reset-start]
> 1354.288: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1356.289: [GC [1 CMS-initial-mark: 12849K(21428K)] 99219K(139444K),
> 0.0111450 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1356.300: [CMS-concurrent-mark-start]
> 1356.316: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1356.316: [CMS-concurrent-preclean-start]
> 1356.317: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1356.317: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1361.322:
> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1361.322: [GC[YG occupancy: 86690 K (118016 K)]1361.322: [Rescan
> (parallel) , 0.0097180 secs]1361.332: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 99540K(139444K), 0.0098210 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1361.332: [CMS-concurrent-sweep-start]
> 1361.333: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1361.333: [CMS-concurrent-reset-start]
> 1361.342: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1363.342: [GC [1 CMS-initial-mark: 12849K(21428K)] 99668K(139444K),
> 0.0110230 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1363.354: [CMS-concurrent-mark-start]
> 1363.368: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1363.368: [CMS-concurrent-preclean-start]
> 1363.369: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1363.369: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1368.378:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1368.378: [GC[YG occupancy: 87139 K (118016 K)]1368.378: [Rescan
> (parallel) , 0.0100770 secs]1368.388: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 99989K(139444K), 0.0101900 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1368.388: [CMS-concurrent-sweep-start]
> 1368.390: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1368.390: [CMS-concurrent-reset-start]
> 1368.398: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1370.399: [GC [1 CMS-initial-mark: 12849K(21428K)] 100117K(139444K),
> 0.0111810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1370.410: [CMS-concurrent-mark-start]
> 1370.426: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1370.426: [CMS-concurrent-preclean-start]
> 1370.427: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1370.427: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1375.447:
> [CMS-concurrent-abortable-preclean: 0.715/5.020 secs] [Times:
> user=0.72 sys=0.00, real=5.02 secs]
> 1375.447: [GC[YG occupancy: 87588 K (118016 K)]1375.447: [Rescan
> (parallel) , 0.0101690 secs]1375.457: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 100438K(139444K), 0.0102730 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1375.457: [CMS-concurrent-sweep-start]
> 1375.459: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1375.459: [CMS-concurrent-reset-start]
> 1375.467: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1377.467: [GC [1 CMS-initial-mark: 12849K(21428K)] 100566K(139444K),
> 0.0110760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1377.478: [CMS-concurrent-mark-start]
> 1377.495: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1377.495: [CMS-concurrent-preclean-start]
> 1377.496: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1377.496: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1382.502:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.69 sys=0.01, real=5.00 secs]
> 1382.502: [GC[YG occupancy: 89213 K (118016 K)]1382.502: [Rescan
> (parallel) , 0.0108630 secs]1382.513: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 102063K(139444K), 0.0109700 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1382.513: [CMS-concurrent-sweep-start]
> 1382.514: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1382.514: [CMS-concurrent-reset-start]
> 1382.523: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1382.743: [GC [1 CMS-initial-mark: 12849K(21428K)] 102127K(139444K),
> 0.0113140 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1382.755: [CMS-concurrent-mark-start]
> 1382.773: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1382.773: [CMS-concurrent-preclean-start]
> 1382.774: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1382.774: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1387.777:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1387.777: [GC[YG occupancy: 89638 K (118016 K)]1387.777: [Rescan
> (parallel) , 0.0113310 secs]1387.789: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 102488K(139444K), 0.0114430 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1387.789: [CMS-concurrent-sweep-start]
> 1387.790: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1387.790: [CMS-concurrent-reset-start]
> 1387.799: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1389.799: [GC [1 CMS-initial-mark: 12849K(21428K)] 102617K(139444K),
> 0.0113540 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1389.810: [CMS-concurrent-mark-start]
> 1389.827: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1389.827: [CMS-concurrent-preclean-start]
> 1389.827: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1389.827: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1394.831:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1394.831: [GC[YG occupancy: 90088 K (118016 K)]1394.831: [Rescan
> (parallel) , 0.0103790 secs]1394.841: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 102938K(139444K), 0.0104960 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1394.842: [CMS-concurrent-sweep-start]
> 1394.844: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1394.844: [CMS-concurrent-reset-start]
> 1394.853: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1396.853: [GC [1 CMS-initial-mark: 12849K(21428K)] 103066K(139444K),
> 0.0114740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1396.865: [CMS-concurrent-mark-start]
> 1396.880: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1396.880: [CMS-concurrent-preclean-start]
> 1396.881: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1396.881: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1401.890:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1401.890: [GC[YG occupancy: 90537 K (118016 K)]1401.891: [Rescan
> (parallel) , 0.0116110 secs]1401.902: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 103387K(139444K), 0.0117240 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1401.902: [CMS-concurrent-sweep-start]
> 1401.904: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1401.904: [CMS-concurrent-reset-start]
> 1401.914: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1403.914: [GC [1 CMS-initial-mark: 12849K(21428K)] 103515K(139444K),
> 0.0111980 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1403.925: [CMS-concurrent-mark-start]
> 1403.943: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1403.943: [CMS-concurrent-preclean-start]
> 1403.944: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.01 secs]
> 1403.944: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1408.982:
> [CMS-concurrent-abortable-preclean: 0.718/5.038 secs] [Times:
> user=0.72 sys=0.00, real=5.03 secs]
> 1408.982: [GC[YG occupancy: 90986 K (118016 K)]1408.982: [Rescan
> (parallel) , 0.0115260 secs]1408.994: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 103836K(139444K), 0.0116320 secs]
> [Times: user=0.13 sys=0.00, real=0.02 secs]
> 1408.994: [CMS-concurrent-sweep-start]
> 1408.996: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1408.996: [CMS-concurrent-reset-start]
> 1409.005: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1411.005: [GC [1 CMS-initial-mark: 12849K(21428K)] 103964K(139444K),
> 0.0114590 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1411.017: [CMS-concurrent-mark-start]
> 1411.034: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1411.034: [CMS-concurrent-preclean-start]
> 1411.034: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1411.034: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1416.140:
> [CMS-concurrent-abortable-preclean: 0.712/5.105 secs] [Times:
> user=0.71 sys=0.00, real=5.10 secs]
> 1416.140: [GC[YG occupancy: 91476 K (118016 K)]1416.140: [Rescan
> (parallel) , 0.0114950 secs]1416.152: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 104326K(139444K), 0.0116020 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1416.152: [CMS-concurrent-sweep-start]
> 1416.154: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1416.154: [CMS-concurrent-reset-start]
> 1416.163: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1418.163: [GC [1 CMS-initial-mark: 12849K(21428K)] 104454K(139444K),
> 0.0114040 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1418.175: [CMS-concurrent-mark-start]
> 1418.191: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1418.191: [CMS-concurrent-preclean-start]
> 1418.191: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1418.191: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1423.198:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1423.199: [GC[YG occupancy: 91925 K (118016 K)]1423.199: [Rescan
> (parallel) , 0.0105460 secs]1423.209: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 104775K(139444K), 0.0106640 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1423.209: [CMS-concurrent-sweep-start]
> 1423.211: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1423.211: [CMS-concurrent-reset-start]
> 1423.220: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1425.220: [GC [1 CMS-initial-mark: 12849K(21428K)] 104903K(139444K),
> 0.0116300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1425.232: [CMS-concurrent-mark-start]
> 1425.248: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1425.248: [CMS-concurrent-preclean-start]
> 1425.248: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1425.248: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1430.252:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1430.252: [GC[YG occupancy: 92374 K (118016 K)]1430.252: [Rescan
> (parallel) , 0.0098720 secs]1430.262: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 105224K(139444K), 0.0099750 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1430.262: [CMS-concurrent-sweep-start]
> 1430.264: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1430.264: [CMS-concurrent-reset-start]
> 1430.273: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1432.274: [GC [1 CMS-initial-mark: 12849K(21428K)] 105352K(139444K),
> 0.0114050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1432.285: [CMS-concurrent-mark-start]
> 1432.301: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1432.301: [CMS-concurrent-preclean-start]
> 1432.301: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1432.301: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1437.304:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1437.305: [GC[YG occupancy: 92823 K (118016 K)]1437.305: [Rescan
> (parallel) , 0.0115010 secs]1437.316: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 105673K(139444K), 0.0116090 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1437.316: [CMS-concurrent-sweep-start]
> 1437.319: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1437.319: [CMS-concurrent-reset-start]
> 1437.328: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1439.328: [GC [1 CMS-initial-mark: 12849K(21428K)] 105801K(139444K),
> 0.0115740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1439.340: [CMS-concurrent-mark-start]
> 1439.356: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1439.356: [CMS-concurrent-preclean-start]
> 1439.356: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1439.356: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1444.411:
> [CMS-concurrent-abortable-preclean: 0.715/5.054 secs] [Times:
> user=0.72 sys=0.00, real=5.05 secs]
> 1444.411: [GC[YG occupancy: 93547 K (118016 K)]1444.411: [Rescan
> (parallel) , 0.0072910 secs]1444.418: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 106397K(139444K), 0.0073970 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1444.419: [CMS-concurrent-sweep-start]
> 1444.420: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1444.420: [CMS-concurrent-reset-start]
> 1444.429: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1446.429: [GC [1 CMS-initial-mark: 12849K(21428K)] 106525K(139444K),
> 0.0117950 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1446.441: [CMS-concurrent-mark-start]
> 1446.457: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1446.457: [CMS-concurrent-preclean-start]
> 1446.458: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1446.458: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1451.461:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1451.461: [GC[YG occupancy: 93996 K (118016 K)]1451.461: [Rescan
> (parallel) , 0.0120870 secs]1451.473: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 106846K(139444K), 0.0121920 secs]
> [Times: user=0.14 sys=0.00, real=0.02 secs]
> 1451.473: [CMS-concurrent-sweep-start]
> 1451.476: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1451.476: [CMS-concurrent-reset-start]
> 1451.485: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1453.485: [GC [1 CMS-initial-mark: 12849K(21428K)] 106974K(139444K),
> 0.0117990 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1453.497: [CMS-concurrent-mark-start]
> 1453.514: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1453.514: [CMS-concurrent-preclean-start]
> 1453.515: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1453.515: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1458.518:
> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1458.518: [GC[YG occupancy: 94445 K (118016 K)]1458.518: [Rescan
> (parallel) , 0.0123720 secs]1458.530: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 107295K(139444K), 0.0124750 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1458.530: [CMS-concurrent-sweep-start]
> 1458.532: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1458.532: [CMS-concurrent-reset-start]
> 1458.540: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1460.541: [GC [1 CMS-initial-mark: 12849K(21428K)] 107423K(139444K),
> 0.0118680 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1460.553: [CMS-concurrent-mark-start]
> 1460.568: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1460.568: [CMS-concurrent-preclean-start]
> 1460.569: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1460.569: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1465.577:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1465.577: [GC[YG occupancy: 94894 K (118016 K)]1465.577: [Rescan
> (parallel) , 0.0119100 secs]1465.589: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 107744K(139444K), 0.0120270 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1465.590: [CMS-concurrent-sweep-start]
> 1465.591: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1465.591: [CMS-concurrent-reset-start]
> 1465.600: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1467.600: [GC [1 CMS-initial-mark: 12849K(21428K)] 107937K(139444K),
> 0.0120020 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1467.612: [CMS-concurrent-mark-start]
> 1467.628: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1467.628: [CMS-concurrent-preclean-start]
> 1467.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1467.628: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1472.636:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1472.637: [GC[YG occupancy: 95408 K (118016 K)]1472.637: [Rescan
> (parallel) , 0.0119090 secs]1472.649: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 108257K(139444K), 0.0120260 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1472.649: [CMS-concurrent-sweep-start]
> 1472.650: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1472.650: [CMS-concurrent-reset-start]
> 1472.659: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1472.775: [GC [1 CMS-initial-mark: 12849K(21428K)] 108365K(139444K),
> 0.0120260 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1472.787: [CMS-concurrent-mark-start]
> 1472.805: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1472.805: [CMS-concurrent-preclean-start]
> 1472.806: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.01 sys=0.00, real=0.00 secs]
> 1472.806: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1477.808:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1477.808: [GC[YG occupancy: 95876 K (118016 K)]1477.808: [Rescan
> (parallel) , 0.0099490 secs]1477.818: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 108726K(139444K), 0.0100580 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1477.818: [CMS-concurrent-sweep-start]
> 1477.820: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1477.820: [CMS-concurrent-reset-start]
> 1477.828: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1479.829: [GC [1 CMS-initial-mark: 12849K(21428K)] 108854K(139444K),
> 0.0119550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1479.841: [CMS-concurrent-mark-start]
> 1479.857: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1479.857: [CMS-concurrent-preclean-start]
> 1479.857: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1479.857: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1484.870:
> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1484.870: [GC[YG occupancy: 96325 K (118016 K)]1484.870: [Rescan
> (parallel) , 0.0122870 secs]1484.882: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 109175K(139444K), 0.0123900 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1484.882: [CMS-concurrent-sweep-start]
> 1484.884: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1484.884: [CMS-concurrent-reset-start]
> 1484.893: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1486.893: [GC [1 CMS-initial-mark: 12849K(21428K)] 109304K(139444K),
> 0.0118470 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1486.905: [CMS-concurrent-mark-start]
> 1486.921: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1486.921: [CMS-concurrent-preclean-start]
> 1486.921: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1486.921: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1491.968:
> [CMS-concurrent-abortable-preclean: 0.720/5.047 secs] [Times:
> user=0.72 sys=0.00, real=5.05 secs]
> 1491.968: [GC[YG occupancy: 96774 K (118016 K)]1491.968: [Rescan
> (parallel) , 0.0122850 secs]1491.981: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 109624K(139444K), 0.0123880 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1491.981: [CMS-concurrent-sweep-start]
> 1491.982: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1491.982: [CMS-concurrent-reset-start]
> 1491.991: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1493.991: [GC [1 CMS-initial-mark: 12849K(21428K)] 109753K(139444K),
> 0.0119790 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1494.004: [CMS-concurrent-mark-start]
> 1494.019: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1494.019: [CMS-concurrent-preclean-start]
> 1494.019: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1494.019: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1499.100:
> [CMS-concurrent-abortable-preclean: 0.722/5.080 secs] [Times:
> user=0.72 sys=0.00, real=5.08 secs]
> 1499.100: [GC[YG occupancy: 98295 K (118016 K)]1499.100: [Rescan
> (parallel) , 0.0123180 secs]1499.112: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 111145K(139444K), 0.0124240 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1499.113: [CMS-concurrent-sweep-start]
> 1499.114: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1499.114: [CMS-concurrent-reset-start]
> 1499.123: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1501.123: [GC [1 CMS-initial-mark: 12849K(21428K)] 111274K(139444K),
> 0.0117720 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
> 1501.135: [CMS-concurrent-mark-start]
> 1501.150: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 1501.150: [CMS-concurrent-preclean-start]
> 1501.151: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.01 sys=0.00, real=0.00 secs]
> 1501.151: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1506.172:
> [CMS-concurrent-abortable-preclean: 0.712/5.022 secs] [Times:
> user=0.71 sys=0.00, real=5.02 secs]
> 1506.172: [GC[YG occupancy: 98890 K (118016 K)]1506.173: [Rescan
> (parallel) , 0.0113790 secs]1506.184: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 111740K(139444K), 0.0114830 secs]
> [Times: user=0.13 sys=0.00, real=0.02 secs]
> 1506.184: [CMS-concurrent-sweep-start]
> 1506.186: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1506.186: [CMS-concurrent-reset-start]
> 1506.195: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1508.196: [GC [1 CMS-initial-mark: 12849K(21428K)] 111868K(139444K),
> 0.0122930 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1508.208: [CMS-concurrent-mark-start]
> 1508.225: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1508.225: [CMS-concurrent-preclean-start]
> 1508.225: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1508.226: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1513.232:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1513.232: [GC[YG occupancy: 99339 K (118016 K)]1513.232: [Rescan
> (parallel) , 0.0123890 secs]1513.244: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 112189K(139444K), 0.0124930 secs]
> [Times: user=0.14 sys=0.00, real=0.02 secs]
> 1513.245: [CMS-concurrent-sweep-start]
> 1513.246: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1513.246: [CMS-concurrent-reset-start]
> 1513.255: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1515.256: [GC [1 CMS-initial-mark: 12849K(21428K)] 113182K(139444K),
> 0.0123210 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1515.268: [CMS-concurrent-mark-start]
> 1515.285: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1515.285: [CMS-concurrent-preclean-start]
> 1515.285: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1515.285: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1520.290:
> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1520.290: [GC[YG occupancy: 100653 K (118016 K)]1520.290: [Rescan
> (parallel) , 0.0125490 secs]1520.303: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 113502K(139444K), 0.0126520 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1520.303: [CMS-concurrent-sweep-start]
> 1520.304: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1520.304: [CMS-concurrent-reset-start]
> 1520.313: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1522.314: [GC [1 CMS-initial-mark: 12849K(21428K)] 113631K(139444K),
> 0.0118790 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1522.326: [CMS-concurrent-mark-start]
> 1522.343: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1522.343: [CMS-concurrent-preclean-start]
> 1522.343: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1522.343: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1527.350:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1527.350: [GC[YG occupancy: 101102 K (118016 K)]1527.350: [Rescan
> (parallel) , 0.0127460 secs]1527.363: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 113952K(139444K), 0.0128490 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1527.363: [CMS-concurrent-sweep-start]
> 1527.365: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1527.365: [CMS-concurrent-reset-start]
> 1527.374: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1529.374: [GC [1 CMS-initial-mark: 12849K(21428K)] 114080K(139444K),
> 0.0117550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1529.386: [CMS-concurrent-mark-start]
> 1529.403: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1529.404: [CMS-concurrent-preclean-start]
> 1529.404: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1529.404: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1534.454:
> [CMS-concurrent-abortable-preclean: 0.712/5.050 secs] [Times:
> user=0.70 sys=0.01, real=5.05 secs]
> 1534.454: [GC[YG occupancy: 101591 K (118016 K)]1534.454: [Rescan
> (parallel) , 0.0122680 secs]1534.466: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 114441K(139444K), 0.0123750 secs]
> [Times: user=0.12 sys=0.02, real=0.01 secs]
> 1534.466: [CMS-concurrent-sweep-start]
> 1534.468: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1534.468: [CMS-concurrent-reset-start]
> 1534.478: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1536.478: [GC [1 CMS-initial-mark: 12849K(21428K)] 114570K(139444K),
> 0.0125250 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1536.491: [CMS-concurrent-mark-start]
> 1536.507: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1536.507: [CMS-concurrent-preclean-start]
> 1536.507: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1536.507: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1541.516:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1541.516: [GC[YG occupancy: 102041 K (118016 K)]1541.516: [Rescan
> (parallel) , 0.0088270 secs]1541.525: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 114890K(139444K), 0.0089300 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1541.525: [CMS-concurrent-sweep-start]
> 1541.527: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1541.527: [CMS-concurrent-reset-start]
> 1541.537: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1543.537: [GC [1 CMS-initial-mark: 12849K(21428K)] 115019K(139444K),
> 0.0124500 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1543.550: [CMS-concurrent-mark-start]
> 1543.566: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1543.566: [CMS-concurrent-preclean-start]
> 1543.567: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1543.567: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1548.578:
> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1548.578: [GC[YG occupancy: 102490 K (118016 K)]1548.578: [Rescan
> (parallel) , 0.0100430 secs]1548.588: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 115340K(139444K), 0.0101440 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1548.588: [CMS-concurrent-sweep-start]
> 1548.589: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1548.589: [CMS-concurrent-reset-start]
> 1548.598: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1550.598: [GC [1 CMS-initial-mark: 12849K(21428K)] 115468K(139444K),
> 0.0125070 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1550.611: [CMS-concurrent-mark-start]
> 1550.627: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1550.627: [CMS-concurrent-preclean-start]
> 1550.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1550.628: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1555.631:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1555.631: [GC[YG occupancy: 103003 K (118016 K)]1555.631: [Rescan
> (parallel) , 0.0117610 secs]1555.643: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 115853K(139444K), 0.0118770 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1555.643: [CMS-concurrent-sweep-start]
> 1555.645: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1555.645: [CMS-concurrent-reset-start]
> 1555.655: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1557.655: [GC [1 CMS-initial-mark: 12849K(21428K)] 115981K(139444K),
> 0.0126720 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1557.668: [CMS-concurrent-mark-start]
> 1557.685: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1557.685: [CMS-concurrent-preclean-start]
> 1557.685: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1557.685: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1562.688:
> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1562.688: [GC[YG occupancy: 103557 K (118016 K)]1562.688: [Rescan
> (parallel) , 0.0121530 secs]1562.700: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 116407K(139444K), 0.0122560 secs]
> [Times: user=0.16 sys=0.00, real=0.01 secs]
> 1562.700: [CMS-concurrent-sweep-start]
> 1562.701: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1562.701: [CMS-concurrent-reset-start]
> 1562.710: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1562.821: [GC [1 CMS-initial-mark: 12849K(21428K)] 116514K(139444K),
> 0.0127240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1562.834: [CMS-concurrent-mark-start]
> 1562.852: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1562.852: [CMS-concurrent-preclean-start]
> 1562.853: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1562.853: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1567.859:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1567.859: [GC[YG occupancy: 104026 K (118016 K)]1567.859: [Rescan
> (parallel) , 0.0131290 secs]1567.872: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 116876K(139444K), 0.0132470 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1567.873: [CMS-concurrent-sweep-start]
> 1567.874: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1567.874: [CMS-concurrent-reset-start]
> 1567.883: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1569.883: [GC [1 CMS-initial-mark: 12849K(21428K)] 117103K(139444K),
> 0.0123770 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1569.896: [CMS-concurrent-mark-start]
> 1569.913: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1569.913: [CMS-concurrent-preclean-start]
> 1569.913: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.01 secs]
> 1569.913: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1574.920:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1574.920: [GC[YG occupancy: 104510 K (118016 K)]1574.920: [Rescan
> (parallel) , 0.0122810 secs]1574.932: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 117360K(139444K), 0.0123870 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1574.933: [CMS-concurrent-sweep-start]
> 1574.935: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1574.935: [CMS-concurrent-reset-start]
> 1574.944: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1575.163: [GC [1 CMS-initial-mark: 12849K(21428K)] 117360K(139444K),
> 0.0121590 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
> 1575.176: [CMS-concurrent-mark-start]
> 1575.193: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1575.193: [CMS-concurrent-preclean-start]
> 1575.193: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.01 secs]
> 1575.193: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1580.197:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.71 sys=0.00, real=5.00 secs]
> 1580.197: [GC[YG occupancy: 104831 K (118016 K)]1580.197: [Rescan
> (parallel) , 0.0129860 secs]1580.210: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 117681K(139444K), 0.0130980 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1580.210: [CMS-concurrent-sweep-start]
> 1580.211: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1580.211: [CMS-concurrent-reset-start]
> 1580.220: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1582.220: [GC [1 CMS-initial-mark: 12849K(21428K)] 117809K(139444K),
> 0.0129700 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1582.234: [CMS-concurrent-mark-start]
> 1582.249: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
> sys=0.01, real=0.02 secs]
> 1582.249: [CMS-concurrent-preclean-start]
> 1582.249: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1582.249: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1587.262:
> [CMS-concurrent-abortable-preclean: 0.707/5.013 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1587.262: [GC[YG occupancy: 105280 K (118016 K)]1587.262: [Rescan
> (parallel) , 0.0134570 secs]1587.276: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 118130K(139444K), 0.0135720 secs]
> [Times: user=0.15 sys=0.00, real=0.02 secs]
> 1587.276: [CMS-concurrent-sweep-start]
> 1587.278: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1587.278: [CMS-concurrent-reset-start]
> 1587.287: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1589.287: [GC [1 CMS-initial-mark: 12849K(21428K)] 118258K(139444K),
> 0.0130010 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1589.301: [CMS-concurrent-mark-start]
> 1589.316: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1589.316: [CMS-concurrent-preclean-start]
> 1589.316: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1589.316: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1594.364:
> [CMS-concurrent-abortable-preclean: 0.712/5.048 secs] [Times:
> user=0.71 sys=0.00, real=5.05 secs]
> 1594.365: [GC[YG occupancy: 105770 K (118016 K)]1594.365: [Rescan
> (parallel) , 0.0131190 secs]1594.378: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 118620K(139444K), 0.0132380 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1594.378: [CMS-concurrent-sweep-start]
> 1594.380: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1594.380: [CMS-concurrent-reset-start]
> 1594.389: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1596.390: [GC [1 CMS-initial-mark: 12849K(21428K)] 118748K(139444K),
> 0.0130650 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1596.403: [CMS-concurrent-mark-start]
> 1596.418: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1596.418: [CMS-concurrent-preclean-start]
> 1596.419: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1596.419: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1601.422:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.01, real=5.00 secs]
> 1601.422: [GC[YG occupancy: 106219 K (118016 K)]1601.422: [Rescan
> (parallel) , 0.0130310 secs]1601.435: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 119069K(139444K), 0.0131490 secs]
> [Times: user=0.16 sys=0.00, real=0.02 secs]
> 1601.435: [CMS-concurrent-sweep-start]
> 1601.437: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1601.437: [CMS-concurrent-reset-start]
> 1601.446: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1603.447: [GC [1 CMS-initial-mark: 12849K(21428K)] 119197K(139444K),
> 0.0130220 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1603.460: [CMS-concurrent-mark-start]
> 1603.476: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1603.476: [CMS-concurrent-preclean-start]
> 1603.476: [CMS-concurrent-preclean: 0.000/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1603.476: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1608.478:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1608.478: [GC[YG occupancy: 106668 K (118016 K)]1608.479: [Rescan
> (parallel) , 0.0122680 secs]1608.491: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 119518K(139444K), 0.0123790 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1608.491: [CMS-concurrent-sweep-start]
> 1608.492: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1608.492: [CMS-concurrent-reset-start]
> 1608.501: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1610.502: [GC [1 CMS-initial-mark: 12849K(21428K)] 119646K(139444K),
> 0.0130770 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1610.515: [CMS-concurrent-mark-start]
> 1610.530: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1610.530: [CMS-concurrent-preclean-start]
> 1610.530: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1610.530: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1615.536:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1615.536: [GC[YG occupancy: 107117 K (118016 K)]1615.536: [Rescan
> (parallel) , 0.0125470 secs]1615.549: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 119967K(139444K), 0.0126510 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1615.549: [CMS-concurrent-sweep-start]
> 1615.551: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1615.551: [CMS-concurrent-reset-start]
> 1615.561: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1617.561: [GC [1 CMS-initial-mark: 12849K(21428K)] 120095K(139444K),
> 0.0129520 secs] [Times: user=0.00 sys=0.00, real=0.02 secs]
> 1617.574: [CMS-concurrent-mark-start]
> 1617.591: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1617.591: [CMS-concurrent-preclean-start]
> 1617.591: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1617.591: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1622.598:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1622.598: [GC[YG occupancy: 107777 K (118016 K)]1622.599: [Rescan
> (parallel) , 0.0140340 secs]1622.613: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 120627K(139444K), 0.0141520 secs]
> [Times: user=0.16 sys=0.00, real=0.01 secs]
> 1622.613: [CMS-concurrent-sweep-start]
> 1622.614: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1622.614: [CMS-concurrent-reset-start]
> 1622.623: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.02 secs]
> 1622.848: [GC [1 CMS-initial-mark: 12849K(21428K)] 120691K(139444K),
> 0.0133410 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1622.861: [CMS-concurrent-mark-start]
> 1622.878: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1622.878: [CMS-concurrent-preclean-start]
> 1622.879: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1622.879: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1627.941:
> [CMS-concurrent-abortable-preclean: 0.656/5.062 secs] [Times:
> user=0.65 sys=0.00, real=5.06 secs]
> 1627.941: [GC[YG occupancy: 108202 K (118016 K)]1627.941: [Rescan
> (parallel) , 0.0135120 secs]1627.955: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 121052K(139444K), 0.0136620 secs]
> [Times: user=0.15 sys=0.00, real=0.02 secs]
> 1627.955: [CMS-concurrent-sweep-start]
> 1627.956: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1627.956: [CMS-concurrent-reset-start]
> 1627.965: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1629.966: [GC [1 CMS-initial-mark: 12849K(21428K)] 121180K(139444K),
> 0.0133770 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1629.979: [CMS-concurrent-mark-start]
> 1629.995: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1629.995: [CMS-concurrent-preclean-start]
> 1629.996: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1629.996: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1634.998:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1634.999: [GC[YG occupancy: 108651 K (118016 K)]1634.999: [Rescan
> (parallel) , 0.0134300 secs]1635.012: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 121501K(139444K), 0.0135530 secs]
> [Times: user=0.16 sys=0.00, real=0.01 secs]
> 1635.012: [CMS-concurrent-sweep-start]
> 1635.014: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1635.014: [CMS-concurrent-reset-start]
> 1635.023: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1637.023: [GC [1 CMS-initial-mark: 12849K(21428K)] 121629K(139444K),
> 0.0127330 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1637.036: [CMS-concurrent-mark-start]
> 1637.053: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1637.054: [CMS-concurrent-preclean-start]
> 1637.054: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1637.054: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1642.062:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1642.062: [GC[YG occupancy: 109100 K (118016 K)]1642.062: [Rescan
> (parallel) , 0.0124310 secs]1642.075: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 121950K(139444K), 0.0125510 secs]
> [Times: user=0.16 sys=0.00, real=0.02 secs]
> 1642.075: [CMS-concurrent-sweep-start]
> 1642.077: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1642.077: [CMS-concurrent-reset-start]
> 1642.086: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1644.087: [GC [1 CMS-initial-mark: 12849K(21428K)] 122079K(139444K),
> 0.0134300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1644.100: [CMS-concurrent-mark-start]
> 1644.116: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1644.116: [CMS-concurrent-preclean-start]
> 1644.116: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1644.116: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1649.125:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1649.126: [GC[YG occupancy: 109549 K (118016 K)]1649.126: [Rescan
> (parallel) , 0.0126870 secs]1649.138: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 122399K(139444K), 0.0128010 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1649.139: [CMS-concurrent-sweep-start]
> 1649.141: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1649.141: [CMS-concurrent-reset-start]
> 1649.150: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1651.150: [GC [1 CMS-initial-mark: 12849K(21428K)] 122528K(139444K),
> 0.0134790 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1651.164: [CMS-concurrent-mark-start]
> 1651.179: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1651.179: [CMS-concurrent-preclean-start]
> 1651.179: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1651.179: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1656.254:
> [CMS-concurrent-abortable-preclean: 0.722/5.074 secs] [Times:
> user=0.71 sys=0.01, real=5.07 secs]
> 1656.254: [GC[YG occupancy: 110039 K (118016 K)]1656.254: [Rescan
> (parallel) , 0.0092110 secs]1656.263: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 122889K(139444K), 0.0093170 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1656.263: [CMS-concurrent-sweep-start]
> 1656.266: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1656.266: [CMS-concurrent-reset-start]
> 1656.275: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1658.275: [GC [1 CMS-initial-mark: 12849K(21428K)] 123017K(139444K),
> 0.0134150 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1658.289: [CMS-concurrent-mark-start]
> 1658.305: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1658.306: [CMS-concurrent-preclean-start]
> 1658.306: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1658.306: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1663.393:
> [CMS-concurrent-abortable-preclean: 0.711/5.087 secs] [Times:
> user=0.71 sys=0.00, real=5.08 secs]
> 1663.393: [GC[YG occupancy: 110488 K (118016 K)]1663.393: [Rescan
> (parallel) , 0.0132450 secs]1663.406: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 123338K(139444K), 0.0133600 secs]
> [Times: user=0.15 sys=0.00, real=0.02 secs]
> 1663.407: [CMS-concurrent-sweep-start]
> 1663.409: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1663.409: [CMS-concurrent-reset-start]
> 1663.418: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1665.418: [GC [1 CMS-initial-mark: 12849K(21428K)] 123467K(139444K),
> 0.0135570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1665.432: [CMS-concurrent-mark-start]
> 1665.447: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1665.447: [CMS-concurrent-preclean-start]
> 1665.448: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1665.448: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1670.457:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1670.457: [GC[YG occupancy: 110937 K (118016 K)]1670.457: [Rescan
> (parallel) , 0.0142820 secs]1670.471: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 123787K(139444K), 0.0144010 secs]
> [Times: user=0.16 sys=0.00, real=0.01 secs]
> 1670.472: [CMS-concurrent-sweep-start]
> 1670.473: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1670.473: [CMS-concurrent-reset-start]
> 1670.482: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1672.482: [GC [1 CMS-initial-mark: 12849K(21428K)] 123916K(139444K),
> 0.0136110 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1672.496: [CMS-concurrent-mark-start]
> 1672.513: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1672.513: [CMS-concurrent-preclean-start]
> 1672.513: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1672.513: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1677.530:
> [CMS-concurrent-abortable-preclean: 0.711/5.017 secs] [Times:
> user=0.71 sys=0.00, real=5.02 secs]
> 1677.530: [GC[YG occupancy: 111387 K (118016 K)]1677.530: [Rescan
> (parallel) , 0.0129210 secs]1677.543: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 124236K(139444K), 0.0130360 secs]
> [Times: user=0.16 sys=0.00, real=0.02 secs]
> 1677.543: [CMS-concurrent-sweep-start]
> 1677.545: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1677.545: [CMS-concurrent-reset-start]
> 1677.554: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1679.554: [GC [1 CMS-initial-mark: 12849K(21428K)] 124365K(139444K),
> 0.0125140 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1679.567: [CMS-concurrent-mark-start]
> 1679.584: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1679.584: [CMS-concurrent-preclean-start]
> 1679.584: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1679.584: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1684.631:
> [CMS-concurrent-abortable-preclean: 0.714/5.047 secs] [Times:
> user=0.72 sys=0.00, real=5.04 secs]
> 1684.631: [GC[YG occupancy: 112005 K (118016 K)]1684.631: [Rescan
> (parallel) , 0.0146760 secs]1684.646: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 124855K(139444K), 0.0147930 secs]
> [Times: user=0.16 sys=0.00, real=0.02 secs]
> 1684.646: [CMS-concurrent-sweep-start]
> 1684.648: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1684.648: [CMS-concurrent-reset-start]
> 1684.656: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1686.656: [GC [1 CMS-initial-mark: 12849K(21428K)] 125048K(139444K),
> 0.0138340 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1686.670: [CMS-concurrent-mark-start]
> 1686.686: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1686.686: [CMS-concurrent-preclean-start]
> 1686.687: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1686.687: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1691.689:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1691.689: [GC[YG occupancy: 112518 K (118016 K)]1691.689: [Rescan
> (parallel) , 0.0142600 secs]1691.703: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 125368K(139444K), 0.0143810 secs]
> [Times: user=0.16 sys=0.00, real=0.02 secs]
> 1691.703: [CMS-concurrent-sweep-start]
> 1691.705: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1691.705: [CMS-concurrent-reset-start]
> 1691.714: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1693.714: [GC [1 CMS-initial-mark: 12849K(21428K)] 125497K(139444K),
> 0.0126710 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1693.727: [CMS-concurrent-mark-start]
> 1693.744: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1693.744: [CMS-concurrent-preclean-start]
> 1693.745: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1693.745: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1698.747:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1698.748: [GC[YG occupancy: 112968 K (118016 K)]1698.748: [Rescan
> (parallel) , 0.0147370 secs]1698.762: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 125818K(139444K), 0.0148490 secs]
> [Times: user=0.17 sys=0.00, real=0.01 secs]
> 1698.763: [CMS-concurrent-sweep-start]
> 1698.764: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1698.764: [CMS-concurrent-reset-start]
> 1698.773: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1700.773: [GC [1 CMS-initial-mark: 12849K(21428K)] 125946K(139444K),
> 0.0128810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1700.786: [CMS-concurrent-mark-start]
> 1700.804: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1700.804: [CMS-concurrent-preclean-start]
> 1700.804: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1700.804: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1705.810:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1705.810: [GC[YG occupancy: 113417 K (118016 K)]1705.810: [Rescan
> (parallel) , 0.0146750 secs]1705.825: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 126267K(139444K), 0.0147760 secs]
> [Times: user=0.17 sys=0.00, real=0.02 secs]
> 1705.825: [CMS-concurrent-sweep-start]
> 1705.827: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1705.827: [CMS-concurrent-reset-start]
> 1705.836: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1707.836: [GC [1 CMS-initial-mark: 12849K(21428K)] 126395K(139444K),
> 0.0137570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1707.850: [CMS-concurrent-mark-start]
> 1707.866: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1707.866: [CMS-concurrent-preclean-start]
> 1707.867: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1707.867: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1712.878:
> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1712.878: [GC[YG occupancy: 113866 K (118016 K)]1712.878: [Rescan
> (parallel) , 0.0116340 secs]1712.890: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 126716K(139444K), 0.0117350 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1712.890: [CMS-concurrent-sweep-start]
> 1712.893: [CMS-concurrent-sweep: 0.002/0.003 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1712.893: [CMS-concurrent-reset-start]
> 1712.902: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1714.902: [GC [1 CMS-initial-mark: 12849K(21428K)] 126984K(139444K),
> 0.0134590 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1714.915: [CMS-concurrent-mark-start]
> 1714.933: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1714.933: [CMS-concurrent-preclean-start]
> 1714.934: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1714.934: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1719.940:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.71 sys=0.00, real=5.00 secs]
> 1719.940: [GC[YG occupancy: 114552 K (118016 K)]1719.940: [Rescan
> (parallel) , 0.0141320 secs]1719.955: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 127402K(139444K), 0.0142280 secs]
> [Times: user=0.16 sys=0.01, real=0.02 secs]
> 1719.955: [CMS-concurrent-sweep-start]
> 1719.956: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1719.956: [CMS-concurrent-reset-start]
> 1719.965: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1721.966: [GC [1 CMS-initial-mark: 12849K(21428K)] 127530K(139444K),
> 0.0139120 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1721.980: [CMS-concurrent-mark-start]
> 1721.996: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1721.996: [CMS-concurrent-preclean-start]
> 1721.997: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1721.997: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1727.010:
> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1727.010: [GC[YG occupancy: 115000 K (118016 K)]1727.010: [Rescan
> (parallel) , 0.0123190 secs]1727.023: [weak refs processing, 0.0000130
> secs] [1 CMS-remark: 12849K(21428K)] 127850K(139444K), 0.0124420 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1727.023: [CMS-concurrent-sweep-start]
> 1727.024: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1727.024: [CMS-concurrent-reset-start]
> 1727.033: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1729.034: [GC [1 CMS-initial-mark: 12849K(21428K)] 127978K(139444K),
> 0.0129330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1729.047: [CMS-concurrent-mark-start]
> 1729.064: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1729.064: [CMS-concurrent-preclean-start]
> 1729.064: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1729.064: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1734.075:
> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1734.075: [GC[YG occupancy: 115449 K (118016 K)]1734.075: [Rescan
> (parallel) , 0.0131600 secs]1734.088: [weak refs processing, 0.0000130
> secs] [1 CMS-remark: 12849K(21428K)] 128298K(139444K), 0.0132810 secs]
> [Times: user=0.16 sys=0.00, real=0.01 secs]
> 1734.089: [CMS-concurrent-sweep-start]
> 1734.091: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1734.091: [CMS-concurrent-reset-start]
> 1734.100: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1736.100: [GC [1 CMS-initial-mark: 12849K(21428K)] 128427K(139444K),
> 0.0141000 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1736.115: [CMS-concurrent-mark-start]
> 1736.130: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1736.131: [CMS-concurrent-preclean-start]
> 1736.131: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1736.131: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1741.139:
> [CMS-concurrent-abortable-preclean: 0.702/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1741.139: [GC[YG occupancy: 115897 K (118016 K)]1741.139: [Rescan
> (parallel) , 0.0146880 secs]1741.154: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 128747K(139444K), 0.0148020 secs]
> [Times: user=0.17 sys=0.00, real=0.02 secs]
> 1741.154: [CMS-concurrent-sweep-start]
> 1741.156: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1741.156: [CMS-concurrent-reset-start]
> 1741.165: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1742.898: [GC [1 CMS-initial-mark: 12849K(21428K)] 129085K(139444K),
> 0.0144050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1742.913: [CMS-concurrent-mark-start]
> 1742.931: [CMS-concurrent-mark: 0.017/0.019 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1742.931: [CMS-concurrent-preclean-start]
> 1742.932: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1742.932: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1748.016:
> [CMS-concurrent-abortable-preclean: 0.728/5.084 secs] [Times:
> user=0.73 sys=0.00, real=5.09 secs]
> 1748.016: [GC[YG occupancy: 116596 K (118016 K)]1748.016: [Rescan
> (parallel) , 0.0149950 secs]1748.031: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 129446K(139444K), 0.0150970 secs]
> [Times: user=0.17 sys=0.00, real=0.01 secs]
> 1748.031: [CMS-concurrent-sweep-start]
> 1748.033: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1748.033: [CMS-concurrent-reset-start]
> 1748.041: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1750.042: [GC [1 CMS-initial-mark: 12849K(21428K)] 129574K(139444K),
> 0.0141840 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1750.056: [CMS-concurrent-mark-start]
> 1750.073: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1750.073: [CMS-concurrent-preclean-start]
> 1750.074: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1750.074: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1755.080:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1755.080: [GC[YG occupancy: 117044 K (118016 K)]1755.080: [Rescan
> (parallel) , 0.0155560 secs]1755.096: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 129894K(139444K), 0.0156580 secs]
> [Times: user=0.17 sys=0.00, real=0.02 secs]
> 1755.096: [CMS-concurrent-sweep-start]
> 1755.097: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1755.097: [CMS-concurrent-reset-start]
> 1755.105: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1756.660: [GC 1756.660: [ParNew: 117108K->482K(118016K), 0.0081410
> secs] 129958K->24535K(144568K), 0.0083030 secs] [Times: user=0.05
> sys=0.01, real=0.01 secs]
> 1756.668: [GC [1 CMS-initial-mark: 24053K(26552K)] 24599K(144568K),
> 0.0015280 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1756.670: [CMS-concurrent-mark-start]
> 1756.688: [CMS-concurrent-mark: 0.016/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1756.688: [CMS-concurrent-preclean-start]
> 1756.689: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1756.689: [GC[YG occupancy: 546 K (118016 K)]1756.689: [Rescan
> (parallel) , 0.0018170 secs]1756.691: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(26552K)] 24599K(144568K), 0.0019050 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1756.691: [CMS-concurrent-sweep-start]
> 1756.694: [CMS-concurrent-sweep: 0.004/0.004 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1756.694: [CMS-concurrent-reset-start]
> 1756.704: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1758.704: [GC [1 CMS-initial-mark: 24053K(40092K)] 25372K(158108K),
> 0.0014030 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1758.705: [CMS-concurrent-mark-start]
> 1758.720: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 1758.720: [CMS-concurrent-preclean-start]
> 1758.720: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.01 sys=0.00, real=0.00 secs]
> 1758.721: [GC[YG occupancy: 1319 K (118016 K)]1758.721: [Rescan
> (parallel) , 0.0014940 secs]1758.722: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 25372K(158108K), 0.0015850 secs]
> [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1758.722: [CMS-concurrent-sweep-start]
> 1758.726: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1758.726: [CMS-concurrent-reset-start]
> 1758.735: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1760.735: [GC [1 CMS-initial-mark: 24053K(40092K)] 25565K(158108K),
> 0.0014530 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1760.737: [CMS-concurrent-mark-start]
> 1760.755: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1760.755: [CMS-concurrent-preclean-start]
> 1760.755: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1760.756: [GC[YG occupancy: 1512 K (118016 K)]1760.756: [Rescan
> (parallel) , 0.0014970 secs]1760.757: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 25565K(158108K), 0.0015980 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1760.757: [CMS-concurrent-sweep-start]
> 1760.761: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1760.761: [CMS-concurrent-reset-start]
> 1760.770: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1762.770: [GC [1 CMS-initial-mark: 24053K(40092K)] 25693K(158108K),
> 0.0013680 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1762.772: [CMS-concurrent-mark-start]
> 1762.788: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1762.788: [CMS-concurrent-preclean-start]
> 1762.788: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1762.788: [GC[YG occupancy: 1640 K (118016 K)]1762.789: [Rescan
> (parallel) , 0.0020360 secs]1762.791: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 25693K(158108K), 0.0021450 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1762.791: [CMS-concurrent-sweep-start]
> 1762.794: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1762.794: [CMS-concurrent-reset-start]
> 1762.803: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1764.804: [GC [1 CMS-initial-mark: 24053K(40092K)] 26747K(158108K),
> 0.0014620 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1764.805: [CMS-concurrent-mark-start]
> 1764.819: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1764.819: [CMS-concurrent-preclean-start]
> 1764.820: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1764.820: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1769.835:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.02 secs]
> 1769.835: [GC[YG occupancy: 3015 K (118016 K)]1769.835: [Rescan
> (parallel) , 0.0010360 secs]1769.836: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 27068K(158108K), 0.0011310 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1769.837: [CMS-concurrent-sweep-start]
> 1769.840: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1769.840: [CMS-concurrent-reset-start]
> 1769.849: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1771.850: [GC [1 CMS-initial-mark: 24053K(40092K)] 27196K(158108K),
> 0.0014740 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1771.851: [CMS-concurrent-mark-start]
> 1771.868: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1771.868: [CMS-concurrent-preclean-start]
> 1771.868: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1771.868: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1776.913:
> [CMS-concurrent-abortable-preclean: 0.112/5.044 secs] [Times:
> user=0.12 sys=0.00, real=5.04 secs]
> 1776.913: [GC[YG occupancy: 4052 K (118016 K)]1776.913: [Rescan
> (parallel) , 0.0017790 secs]1776.915: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 28105K(158108K), 0.0018790 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1776.915: [CMS-concurrent-sweep-start]
> 1776.918: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1776.918: [CMS-concurrent-reset-start]
> 1776.927: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1778.928: [GC [1 CMS-initial-mark: 24053K(40092K)] 28233K(158108K),
> 0.0015470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1778.929: [CMS-concurrent-mark-start]
> 1778.947: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1778.947: [CMS-concurrent-preclean-start]
> 1778.947: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1778.947: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1783.963:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1783.963: [GC[YG occupancy: 4505 K (118016 K)]1783.963: [Rescan
> (parallel) , 0.0014480 secs]1783.965: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 28558K(158108K), 0.0015470 secs]
> [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1783.965: [CMS-concurrent-sweep-start]
> 1783.968: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1783.968: [CMS-concurrent-reset-start]
> 1783.977: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1785.978: [GC [1 CMS-initial-mark: 24053K(40092K)] 28686K(158108K),
> 0.0015760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1785.979: [CMS-concurrent-mark-start]
> 1785.996: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1785.996: [CMS-concurrent-preclean-start]
> 1785.996: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1785.996: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1791.009:
> [CMS-concurrent-abortable-preclean: 0.107/5.013 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1791.010: [GC[YG occupancy: 4954 K (118016 K)]1791.010: [Rescan
> (parallel) , 0.0020030 secs]1791.012: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 29007K(158108K), 0.0021040 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1791.012: [CMS-concurrent-sweep-start]
> 1791.015: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1791.015: [CMS-concurrent-reset-start]
> 1791.023: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1793.023: [GC [1 CMS-initial-mark: 24053K(40092K)] 29136K(158108K),
> 0.0017200 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1793.025: [CMS-concurrent-mark-start]
> 1793.044: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.08
> sys=0.00, real=0.02 secs]
> 1793.044: [CMS-concurrent-preclean-start]
> 1793.045: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1793.045: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1798.137:
> [CMS-concurrent-abortable-preclean: 0.112/5.093 secs] [Times:
> user=0.11 sys=0.00, real=5.09 secs]
> 1798.137: [GC[YG occupancy: 6539 K (118016 K)]1798.137: [Rescan
> (parallel) , 0.0016650 secs]1798.139: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 30592K(158108K), 0.0017600 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1798.139: [CMS-concurrent-sweep-start]
> 1798.143: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1798.143: [CMS-concurrent-reset-start]
> 1798.152: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1800.152: [GC [1 CMS-initial-mark: 24053K(40092K)] 30721K(158108K),
> 0.0016650 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1800.154: [CMS-concurrent-mark-start]
> 1800.170: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1800.170: [CMS-concurrent-preclean-start]
> 1800.171: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1800.171: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1805.181:
> [CMS-concurrent-abortable-preclean: 0.110/5.010 secs] [Times:
> user=0.12 sys=0.00, real=5.01 secs]
> 1805.181: [GC[YG occupancy: 8090 K (118016 K)]1805.181: [Rescan
> (parallel) , 0.0018850 secs]1805.183: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 32143K(158108K), 0.0019860 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1805.183: [CMS-concurrent-sweep-start]
> 1805.187: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1805.187: [CMS-concurrent-reset-start]
> 1805.196: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1807.196: [GC [1 CMS-initial-mark: 24053K(40092K)] 32272K(158108K),
> 0.0018760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1807.198: [CMS-concurrent-mark-start]
> 1807.216: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1807.216: [CMS-concurrent-preclean-start]
> 1807.216: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1807.216: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1812.232:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1812.232: [GC[YG occupancy: 8543 K (118016 K)]1812.232: [Rescan
> (parallel) , 0.0020890 secs]1812.234: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 32596K(158108K), 0.0021910 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1812.234: [CMS-concurrent-sweep-start]
> 1812.238: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1812.238: [CMS-concurrent-reset-start]
> 1812.247: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1812.928: [GC [1 CMS-initial-mark: 24053K(40092K)] 32661K(158108K),
> 0.0019710 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1812.930: [CMS-concurrent-mark-start]
> 1812.947: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1812.947: [CMS-concurrent-preclean-start]
> 1812.947: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1812.948: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1817.963:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1817.963: [GC[YG occupancy: 8928 K (118016 K)]1817.963: [Rescan
> (parallel) , 0.0011790 secs]1817.964: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 32981K(158108K), 0.0012750 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1817.964: [CMS-concurrent-sweep-start]
> 1817.968: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1817.968: [CMS-concurrent-reset-start]
> 1817.977: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1819.977: [GC [1 CMS-initial-mark: 24053K(40092K)] 33110K(158108K),
> 0.0018900 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1819.979: [CMS-concurrent-mark-start]
> 1819.996: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1819.997: [CMS-concurrent-preclean-start]
> 1819.997: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1819.997: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1825.012:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1825.013: [GC[YG occupancy: 9377 K (118016 K)]1825.013: [Rescan
> (parallel) , 0.0020580 secs]1825.015: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 33431K(158108K), 0.0021510 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1825.015: [CMS-concurrent-sweep-start]
> 1825.018: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1825.018: [CMS-concurrent-reset-start]
> 1825.027: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1827.028: [GC [1 CMS-initial-mark: 24053K(40092K)] 33559K(158108K),
> 0.0019140 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1827.030: [CMS-concurrent-mark-start]
> 1827.047: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1827.047: [CMS-concurrent-preclean-start]
> 1827.047: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1827.047: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1832.066:
> [CMS-concurrent-abortable-preclean: 0.109/5.018 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1832.066: [GC[YG occupancy: 9827 K (118016 K)]1832.066: [Rescan
> (parallel) , 0.0019440 secs]1832.068: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 33880K(158108K), 0.0020410 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1832.068: [CMS-concurrent-sweep-start]
> 1832.071: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1832.071: [CMS-concurrent-reset-start]
> 1832.081: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1832.935: [GC [1 CMS-initial-mark: 24053K(40092K)] 34093K(158108K),
> 0.0019830 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1832.937: [CMS-concurrent-mark-start]
> 1832.954: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1832.954: [CMS-concurrent-preclean-start]
> 1832.955: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1832.955: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1837.970:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1837.970: [GC[YG occupancy: 10349 K (118016 K)]1837.970: [Rescan
> (parallel) , 0.0019670 secs]1837.972: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 34402K(158108K), 0.0020800 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1837.972: [CMS-concurrent-sweep-start]
> 1837.976: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1837.976: [CMS-concurrent-reset-start]
> 1837.985: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1839.985: [GC [1 CMS-initial-mark: 24053K(40092K)] 34531K(158108K),
> 0.0020220 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1839.987: [CMS-concurrent-mark-start]
> 1840.005: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.06
> sys=0.01, real=0.02 secs]
> 1840.005: [CMS-concurrent-preclean-start]
> 1840.006: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1840.006: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1845.018:
> [CMS-concurrent-abortable-preclean: 0.106/5.012 secs] [Times:
> user=0.10 sys=0.01, real=5.01 secs]
> 1845.018: [GC[YG occupancy: 10798 K (118016 K)]1845.018: [Rescan
> (parallel) , 0.0015500 secs]1845.019: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 34851K(158108K), 0.0016500 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1845.020: [CMS-concurrent-sweep-start]
> 1845.023: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1845.023: [CMS-concurrent-reset-start]
> 1845.032: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1847.032: [GC [1 CMS-initial-mark: 24053K(40092K)] 34980K(158108K),
> 0.0020600 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1847.035: [CMS-concurrent-mark-start]
> 1847.051: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1847.051: [CMS-concurrent-preclean-start]
> 1847.052: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1847.052: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1852.067:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.02 secs]
> 1852.067: [GC[YG occupancy: 11247 K (118016 K)]1852.067: [Rescan
> (parallel) , 0.0011880 secs]1852.069: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 35300K(158108K), 0.0012900 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1852.069: [CMS-concurrent-sweep-start]
> 1852.072: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1852.072: [CMS-concurrent-reset-start]
> 1852.081: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1854.082: [GC [1 CMS-initial-mark: 24053K(40092K)] 35429K(158108K),
> 0.0021010 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1854.084: [CMS-concurrent-mark-start]
> 1854.100: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1854.100: [CMS-concurrent-preclean-start]
> 1854.101: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1854.101: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1859.116:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1859.116: [GC[YG occupancy: 11701 K (118016 K)]1859.117: [Rescan
> (parallel) , 0.0010230 secs]1859.118: [weak refs processing, 0.0000130
> secs] [1 CMS-remark: 24053K(40092K)] 35754K(158108K), 0.0011230 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1859.118: [CMS-concurrent-sweep-start]
> 1859.121: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1859.121: [CMS-concurrent-reset-start]
> 1859.130: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1861.131: [GC [1 CMS-initial-mark: 24053K(40092K)] 35882K(158108K),
> 0.0021240 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1861.133: [CMS-concurrent-mark-start]
> 1861.149: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1861.149: [CMS-concurrent-preclean-start]
> 1861.150: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1861.150: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1866.220:
> [CMS-concurrent-abortable-preclean: 0.112/5.070 secs] [Times:
> user=0.12 sys=0.00, real=5.07 secs]
> 1866.220: [GC[YG occupancy: 12388 K (118016 K)]1866.220: [Rescan
> (parallel) , 0.0027090 secs]1866.223: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 36441K(158108K), 0.0028070 secs]
> [Times: user=0.02 sys=0.00, real=0.01 secs]
> 1866.223: [CMS-concurrent-sweep-start]
> 1866.227: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1866.227: [CMS-concurrent-reset-start]
> 1866.236: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1868.236: [GC [1 CMS-initial-mark: 24053K(40092K)] 36569K(158108K),
> 0.0023650 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1868.239: [CMS-concurrent-mark-start]
> 1868.256: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1868.256: [CMS-concurrent-preclean-start]
> 1868.257: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1868.257: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1873.267:
> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
> user=0.13 sys=0.00, real=5.01 secs]
> 1873.268: [GC[YG occupancy: 12837 K (118016 K)]1873.268: [Rescan
> (parallel) , 0.0018720 secs]1873.270: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 36890K(158108K), 0.0019730 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1873.270: [CMS-concurrent-sweep-start]
> 1873.273: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1873.273: [CMS-concurrent-reset-start]
> 1873.282: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1875.283: [GC [1 CMS-initial-mark: 24053K(40092K)] 37018K(158108K),
> 0.0024410 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1875.285: [CMS-concurrent-mark-start]
> 1875.302: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1875.302: [CMS-concurrent-preclean-start]
> 1875.302: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1875.303: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1880.318:
> [CMS-concurrent-abortable-preclean: 0.110/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1880.318: [GC[YG occupancy: 13286 K (118016 K)]1880.318: [Rescan
> (parallel) , 0.0023860 secs]1880.321: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 37339K(158108K), 0.0024910 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1880.321: [CMS-concurrent-sweep-start]
> 1880.324: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1880.324: [CMS-concurrent-reset-start]
> 1880.333: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1882.334: [GC [1 CMS-initial-mark: 24053K(40092K)] 37467K(158108K),
> 0.0024090 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1882.336: [CMS-concurrent-mark-start]
> 1882.352: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1882.352: [CMS-concurrent-preclean-start]
> 1882.353: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1882.353: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1887.368:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1887.368: [GC[YG occupancy: 13739 K (118016 K)]1887.368: [Rescan
> (parallel) , 0.0022370 secs]1887.370: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 37792K(158108K), 0.0023360 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1887.371: [CMS-concurrent-sweep-start]
> 1887.374: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1887.374: [CMS-concurrent-reset-start]
> 1887.383: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1889.384: [GC [1 CMS-initial-mark: 24053K(40092K)] 37920K(158108K),
> 0.0024690 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1889.386: [CMS-concurrent-mark-start]
> 1889.404: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1889.404: [CMS-concurrent-preclean-start]
> 1889.405: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.01 sys=0.00, real=0.00 secs]
> 1889.405: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1894.488:
> [CMS-concurrent-abortable-preclean: 0.112/5.083 secs] [Times:
> user=0.11 sys=0.00, real=5.08 secs]
> 1894.488: [GC[YG occupancy: 14241 K (118016 K)]1894.488: [Rescan
> (parallel) , 0.0020670 secs]1894.490: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 38294K(158108K), 0.0021630 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1894.490: [CMS-concurrent-sweep-start]
> 1894.494: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1894.494: [CMS-concurrent-reset-start]
> 1894.503: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1896.503: [GC [1 CMS-initial-mark: 24053K(40092K)] 38422K(158108K),
> 0.0025430 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1896.506: [CMS-concurrent-mark-start]
> 1896.524: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1896.524: [CMS-concurrent-preclean-start]
> 1896.525: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1896.525: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1901.540:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1901.540: [GC[YG occupancy: 14690 K (118016 K)]1901.540: [Rescan
> (parallel) , 0.0014810 secs]1901.542: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 38743K(158108K), 0.0015820 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1901.542: [CMS-concurrent-sweep-start]
> 1901.545: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1901.545: [CMS-concurrent-reset-start]
> 1901.555: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1903.555: [GC [1 CMS-initial-mark: 24053K(40092K)] 38871K(158108K),
> 0.0025990 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1903.558: [CMS-concurrent-mark-start]
> 1903.575: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1903.575: [CMS-concurrent-preclean-start]
> 1903.576: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1903.576: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1908.586:
> [CMS-concurrent-abortable-preclean: 0.105/5.010 secs] [Times:
> user=0.10 sys=0.00, real=5.01 secs]
> 1908.587: [GC[YG occupancy: 15207 K (118016 K)]1908.587: [Rescan
> (parallel) , 0.0026240 secs]1908.589: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 39260K(158108K), 0.0027260 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1908.589: [CMS-concurrent-sweep-start]
> 1908.593: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1908.593: [CMS-concurrent-reset-start]
> 1908.602: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1910.602: [GC [1 CMS-initial-mark: 24053K(40092K)] 39324K(158108K),
> 0.0025610 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1910.605: [CMS-concurrent-mark-start]
> 1910.621: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1910.621: [CMS-concurrent-preclean-start]
> 1910.622: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.01 sys=0.00, real=0.00 secs]
> 1910.622: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1915.684:
> [CMS-concurrent-abortable-preclean: 0.112/5.062 secs] [Times:
> user=0.11 sys=0.00, real=5.07 secs]
> 1915.684: [GC[YG occupancy: 15592 K (118016 K)]1915.684: [Rescan
> (parallel) , 0.0023940 secs]1915.687: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 39645K(158108K), 0.0025050 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1915.687: [CMS-concurrent-sweep-start]
> 1915.690: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1915.690: [CMS-concurrent-reset-start]
> 1915.699: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1917.700: [GC [1 CMS-initial-mark: 24053K(40092K)] 39838K(158108K),
> 0.0025010 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1917.702: [CMS-concurrent-mark-start]
> 1917.719: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1917.719: [CMS-concurrent-preclean-start]
> 1917.719: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1917.719: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1922.735:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.01, real=5.02 secs]
> 1922.735: [GC[YG occupancy: 16198 K (118016 K)]1922.735: [Rescan
> (parallel) , 0.0028750 secs]1922.738: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 40251K(158108K), 0.0029760 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1922.738: [CMS-concurrent-sweep-start]
> 1922.741: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1922.741: [CMS-concurrent-reset-start]
> 1922.751: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1922.957: [GC [1 CMS-initial-mark: 24053K(40092K)] 40324K(158108K),
> 0.0027380 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1922.960: [CMS-concurrent-mark-start]
> 1922.978: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1922.978: [CMS-concurrent-preclean-start]
> 1922.979: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1922.979: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1927.994:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.02 secs]
> 1927.995: [GC[YG occupancy: 16645 K (118016 K)]1927.995: [Rescan
> (parallel) , 0.0013210 secs]1927.996: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 40698K(158108K), 0.0017610 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1927.996: [CMS-concurrent-sweep-start]
> 1928.000: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1928.000: [CMS-concurrent-reset-start]
> 1928.009: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1930.009: [GC [1 CMS-initial-mark: 24053K(40092K)] 40826K(158108K),
> 0.0028310 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1930.012: [CMS-concurrent-mark-start]
> 1930.028: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1930.028: [CMS-concurrent-preclean-start]
> 1930.029: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1930.029: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1935.044:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1935.045: [GC[YG occupancy: 17098 K (118016 K)]1935.045: [Rescan
> (parallel) , 0.0015440 secs]1935.046: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 41151K(158108K), 0.0016490 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1935.046: [CMS-concurrent-sweep-start]
> 1935.050: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1935.050: [CMS-concurrent-reset-start]
> 1935.059: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1937.059: [GC [1 CMS-initial-mark: 24053K(40092K)] 41279K(158108K),
> 0.0028290 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1937.062: [CMS-concurrent-mark-start]
> 1937.079: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1937.079: [CMS-concurrent-preclean-start]
> 1937.079: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1937.079: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1942.095:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.01, real=5.02 secs]
> 1942.095: [GC[YG occupancy: 17547 K (118016 K)]1942.095: [Rescan
> (parallel) , 0.0030270 secs]1942.098: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 41600K(158108K), 0.0031250 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1942.098: [CMS-concurrent-sweep-start]
> 1942.101: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1942.101: [CMS-concurrent-reset-start]
> 1942.111: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1944.111: [GC [1 CMS-initial-mark: 24053K(40092K)] 41728K(158108K),
> 0.0028080 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1944.114: [CMS-concurrent-mark-start]
> 1944.130: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1944.130: [CMS-concurrent-preclean-start]
> 1944.131: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1944.131: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1949.146:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1949.146: [GC[YG occupancy: 17996 K (118016 K)]1949.146: [Rescan
> (parallel) , 0.0028800 secs]1949.149: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 42049K(158108K), 0.0029810 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1949.149: [CMS-concurrent-sweep-start]
> 1949.152: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1949.152: [CMS-concurrent-reset-start]
> 1949.162: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1951.162: [GC [1 CMS-initial-mark: 24053K(40092K)] 42177K(158108K),
> 0.0028760 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1951.165: [CMS-concurrent-mark-start]
> 1951.184: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1951.184: [CMS-concurrent-preclean-start]
> 1951.184: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1951.184: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1956.244:
> [CMS-concurrent-abortable-preclean: 0.112/5.059 secs] [Times:
> user=0.11 sys=0.01, real=5.05 secs]
> 1956.244: [GC[YG occupancy: 18498 K (118016 K)]1956.244: [Rescan
> (parallel) , 0.0019760 secs]1956.246: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 42551K(158108K), 0.0020750 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 1956.246: [CMS-concurrent-sweep-start]
> 1956.249: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1956.249: [CMS-concurrent-reset-start]
> 1956.259: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1958.259: [GC [1 CMS-initial-mark: 24053K(40092K)] 42747K(158108K),
> 0.0029160 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1958.262: [CMS-concurrent-mark-start]
> 1958.279: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1958.279: [CMS-concurrent-preclean-start]
> 1958.279: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1958.279: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1963.295:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1963.295: [GC[YG occupancy: 18951 K (118016 K)]1963.295: [Rescan
> (parallel) , 0.0020140 secs]1963.297: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 43004K(158108K), 0.0021100 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1963.297: [CMS-concurrent-sweep-start]
> 1963.300: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1963.300: [CMS-concurrent-reset-start]
> 1963.310: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1965.310: [GC [1 CMS-initial-mark: 24053K(40092K)] 43132K(158108K),
> 0.0029420 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1965.313: [CMS-concurrent-mark-start]
> 1965.329: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1965.329: [CMS-concurrent-preclean-start]
> 1965.330: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1965.330: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1970.345:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.02 secs]
> 1970.345: [GC[YG occupancy: 19400 K (118016 K)]1970.345: [Rescan
> (parallel) , 0.0031610 secs]1970.349: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 43453K(158108K), 0.0032580 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1970.349: [CMS-concurrent-sweep-start]
> 1970.352: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1970.352: [CMS-concurrent-reset-start]
> 1970.361: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1972.362: [GC [1 CMS-initial-mark: 24053K(40092K)] 43581K(158108K),
> 0.0029960 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1972.365: [CMS-concurrent-mark-start]
> 1972.381: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1972.381: [CMS-concurrent-preclean-start]
> 1972.382: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1972.382: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1977.397:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1977.398: [GC[YG occupancy: 19849 K (118016 K)]1977.398: [Rescan
> (parallel) , 0.0018110 secs]1977.399: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 43902K(158108K), 0.0019100 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1977.400: [CMS-concurrent-sweep-start]
> 1977.403: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1977.403: [CMS-concurrent-reset-start]
> 1977.412: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1979.413: [GC [1 CMS-initial-mark: 24053K(40092K)] 44031K(158108K),
> 0.0030240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1979.416: [CMS-concurrent-mark-start]
> 1979.434: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
> sys=0.00, real=0.02 secs]
> 1979.434: [CMS-concurrent-preclean-start]
> 1979.434: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1979.434: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1984.511:
> [CMS-concurrent-abortable-preclean: 0.112/5.077 secs] [Times:
> user=0.12 sys=0.00, real=5.07 secs]
> 1984.511: [GC[YG occupancy: 20556 K (118016 K)]1984.511: [Rescan
> (parallel) , 0.0032740 secs]1984.514: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 44609K(158108K), 0.0033720 secs]
> [Times: user=0.03 sys=0.00, real=0.01 secs]
> 1984.515: [CMS-concurrent-sweep-start]
> 1984.518: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1984.518: [CMS-concurrent-reset-start]
> 1984.527: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1986.528: [GC [1 CMS-initial-mark: 24053K(40092K)] 44737K(158108K),
> 0.0032890 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1986.531: [CMS-concurrent-mark-start]
> 1986.548: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1986.548: [CMS-concurrent-preclean-start]
> 1986.548: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1986.548: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1991.564:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1991.564: [GC[YG occupancy: 21005 K (118016 K)]1991.564: [Rescan
> (parallel) , 0.0022540 secs]1991.566: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 45058K(158108K), 0.0023650 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1991.566: [CMS-concurrent-sweep-start]
> 1991.570: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1991.570: [CMS-concurrent-reset-start]
> 1991.579: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1993.579: [GC [1 CMS-initial-mark: 24053K(40092K)] 45187K(158108K),
> 0.0032480 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1993.583: [CMS-concurrent-mark-start]
> 1993.599: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1993.599: [CMS-concurrent-preclean-start]
> 1993.600: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1993.600: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 1998.688:
> [CMS-concurrent-abortable-preclean: 0.112/5.089 secs] [Times:
> user=0.10 sys=0.01, real=5.09 secs]
> 1998.689: [GC[YG occupancy: 21454 K (118016 K)]1998.689: [Rescan
> (parallel) , 0.0025510 secs]1998.691: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 45507K(158108K), 0.0026500 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 1998.691: [CMS-concurrent-sweep-start]
> 1998.695: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1998.695: [CMS-concurrent-reset-start]
> 1998.704: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 2000.704: [GC [1 CMS-initial-mark: 24053K(40092K)] 45636K(158108K),
> 0.0033350 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2000.708: [CMS-concurrent-mark-start]
> 2000.726: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2000.726: [CMS-concurrent-preclean-start]
> 2000.726: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2000.726: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2005.742:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.01 secs]
> 2005.742: [GC[YG occupancy: 21968 K (118016 K)]2005.742: [Rescan
> (parallel) , 0.0027300 secs]2005.745: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 46021K(158108K), 0.0028560 secs]
> [Times: user=0.02 sys=0.01, real=0.01 secs]
> 2005.745: [CMS-concurrent-sweep-start]
> 2005.748: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2005.748: [CMS-concurrent-reset-start]
> 2005.757: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.01, real=0.01 secs]
> 2007.758: [GC [1 CMS-initial-mark: 24053K(40092K)] 46217K(158108K),
> 0.0033290 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2007.761: [CMS-concurrent-mark-start]
> 2007.778: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2007.778: [CMS-concurrent-preclean-start]
> 2007.778: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2007.778: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2012.794:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 2012.794: [GC[YG occupancy: 22421 K (118016 K)]2012.794: [Rescan
> (parallel) , 0.0036890 secs]2012.798: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 46474K(158108K), 0.0037910 secs]
> [Times: user=0.02 sys=0.01, real=0.00 secs]
> 2012.798: [CMS-concurrent-sweep-start]
> 2012.801: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2012.801: [CMS-concurrent-reset-start]
> 2012.810: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2012.980: [GC [1 CMS-initial-mark: 24053K(40092K)] 46547K(158108K),
> 0.0033990 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 2012.984: [CMS-concurrent-mark-start]
> 2013.004: [CMS-concurrent-mark: 0.019/0.020 secs] [Times: user=0.06
> sys=0.01, real=0.02 secs]
> 2013.004: [CMS-concurrent-preclean-start]
> 2013.005: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2013.005: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2018.020:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.01 secs]
> 2018.020: [GC[YG occupancy: 22867 K (118016 K)]2018.020: [Rescan
> (parallel) , 0.0025180 secs]2018.023: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 46920K(158108K), 0.0026190 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 2018.023: [CMS-concurrent-sweep-start]
> 2018.026: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2018.026: [CMS-concurrent-reset-start]
> 2018.036: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2020.036: [GC [1 CMS-initial-mark: 24053K(40092K)] 47048K(158108K),
> 0.0034020 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2020.039: [CMS-concurrent-mark-start]
> 2020.057: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2020.057: [CMS-concurrent-preclean-start]
> 2020.058: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2020.058: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2025.073:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2025.073: [GC[YG occupancy: 23316 K (118016 K)]2025.073: [Rescan
> (parallel) , 0.0020110 secs]2025.075: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 47369K(158108K), 0.0021080 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 2025.075: [CMS-concurrent-sweep-start]
> 2025.079: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2025.079: [CMS-concurrent-reset-start]
> 2025.088: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2027.088: [GC [1 CMS-initial-mark: 24053K(40092K)] 47498K(158108K),
> 0.0034100 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2027.092: [CMS-concurrent-mark-start]
> 2027.108: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2027.108: [CMS-concurrent-preclean-start]
> 2027.109: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2027.109: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2032.120:
> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
> user=0.10 sys=0.00, real=5.01 secs]
> 2032.120: [GC[YG occupancy: 23765 K (118016 K)]2032.120: [Rescan
> (parallel) , 0.0025970 secs]2032.123: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 47818K(158108K), 0.0026940 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 2032.123: [CMS-concurrent-sweep-start]
> 2032.126: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2032.126: [CMS-concurrent-reset-start]
> 2032.135: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2034.136: [GC [1 CMS-initial-mark: 24053K(40092K)] 47951K(158108K),
> 0.0034720 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2034.139: [CMS-concurrent-mark-start]
> 2034.156: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2034.156: [CMS-concurrent-preclean-start]
> 2034.156: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2034.156: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2039.171:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2039.172: [GC[YG occupancy: 24218 K (118016 K)]2039.172: [Rescan
> (parallel) , 0.0038590 secs]2039.176: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 48271K(158108K), 0.0039560 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2039.176: [CMS-concurrent-sweep-start]
> 2039.179: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2039.179: [CMS-concurrent-reset-start]
> 2039.188: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2041.188: [GC [1 CMS-initial-mark: 24053K(40092K)] 48400K(158108K),
> 0.0035110 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2041.192: [CMS-concurrent-mark-start]
> 2041.209: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2041.209: [CMS-concurrent-preclean-start]
> 2041.209: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2041.209: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2046.268:
> [CMS-concurrent-abortable-preclean: 0.108/5.058 secs] [Times:
> user=0.12 sys=0.00, real=5.06 secs]
> 2046.268: [GC[YG occupancy: 24813 K (118016 K)]2046.268: [Rescan
> (parallel) , 0.0042050 secs]2046.272: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 48866K(158108K), 0.0043070 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2046.272: [CMS-concurrent-sweep-start]
> 2046.275: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2046.275: [CMS-concurrent-reset-start]
> 2046.285: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2048.285: [GC [1 CMS-initial-mark: 24053K(40092K)] 48994K(158108K),
> 0.0037700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2048.289: [CMS-concurrent-mark-start]
> 2048.307: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2048.307: [CMS-concurrent-preclean-start]
> 2048.307: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2048.307: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2053.323:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2053.323: [GC[YG occupancy: 25262 K (118016 K)]2053.323: [Rescan
> (parallel) , 0.0030780 secs]2053.326: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 49315K(158108K), 0.0031760 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2053.326: [CMS-concurrent-sweep-start]
> 2053.329: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2053.329: [CMS-concurrent-reset-start]
> 2053.338: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2055.339: [GC [1 CMS-initial-mark: 24053K(40092K)] 49444K(158108K),
> 0.0037730 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2055.343: [CMS-concurrent-mark-start]
> 2055.359: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2055.359: [CMS-concurrent-preclean-start]
> 2055.360: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2055.360: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2060.373:
> [CMS-concurrent-abortable-preclean: 0.107/5.013 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2060.373: [GC[YG occupancy: 25715 K (118016 K)]2060.373: [Rescan
> (parallel) , 0.0037090 secs]2060.377: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 49768K(158108K), 0.0038110 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2060.377: [CMS-concurrent-sweep-start]
> 2060.380: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2060.380: [CMS-concurrent-reset-start]
> 2060.389: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2062.390: [GC [1 CMS-initial-mark: 24053K(40092K)] 49897K(158108K),
> 0.0037860 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2062.394: [CMS-concurrent-mark-start]
> 2062.410: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2062.410: [CMS-concurrent-preclean-start]
> 2062.411: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2062.411: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2067.426:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.02 secs]
> 2067.427: [GC[YG occupancy: 26231 K (118016 K)]2067.427: [Rescan
> (parallel) , 0.0031980 secs]2067.430: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 50284K(158108K), 0.0033100 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2067.430: [CMS-concurrent-sweep-start]
> 2067.433: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2067.433: [CMS-concurrent-reset-start]
> 2067.443: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2069.443: [GC [1 CMS-initial-mark: 24053K(40092K)] 50412K(158108K),
> 0.0038060 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 2069.447: [CMS-concurrent-mark-start]
> 2069.465: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2069.465: [CMS-concurrent-preclean-start]
> 2069.465: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2069.465: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2074.535:
> [CMS-concurrent-abortable-preclean: 0.112/5.070 secs] [Times:
> user=0.12 sys=0.00, real=5.06 secs]
> 2074.535: [GC[YG occupancy: 26749 K (118016 K)]2074.535: [Rescan
> (parallel) , 0.0040450 secs]2074.539: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 50802K(158108K), 0.0041460 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2074.539: [CMS-concurrent-sweep-start]
> 2074.543: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2074.543: [CMS-concurrent-reset-start]
> 2074.552: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2076.552: [GC [1 CMS-initial-mark: 24053K(40092K)] 50930K(158108K),
> 0.0038960 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 2076.556: [CMS-concurrent-mark-start]
> 2076.575: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2076.575: [CMS-concurrent-preclean-start]
> 2076.575: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2076.575: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2081.590:
> [CMS-concurrent-abortable-preclean: 0.109/5.014 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2081.590: [GC[YG occupancy: 27198 K (118016 K)]2081.590: [Rescan
> (parallel) , 0.0042420 secs]2081.594: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 51251K(158108K), 0.0043450 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2081.594: [CMS-concurrent-sweep-start]
> 2081.597: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2081.597: [CMS-concurrent-reset-start]
> 2081.607: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2083.607: [GC [1 CMS-initial-mark: 24053K(40092K)] 51447K(158108K),
> 0.0038630 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2083.611: [CMS-concurrent-mark-start]
> 2083.628: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2083.628: [CMS-concurrent-preclean-start]
> 2083.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2083.628: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2088.642:
> [CMS-concurrent-abortable-preclean: 0.108/5.014 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2088.642: [GC[YG occupancy: 27651 K (118016 K)]2088.642: [Rescan
> (parallel) , 0.0031520 secs]2088.645: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 51704K(158108K), 0.0032520 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2088.645: [CMS-concurrent-sweep-start]
> 2088.649: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2088.649: [CMS-concurrent-reset-start]
> 2088.658: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2090.658: [GC [1 CMS-initial-mark: 24053K(40092K)] 51832K(158108K),
> 0.0039130 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2090.662: [CMS-concurrent-mark-start]
> 2090.678: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2090.678: [CMS-concurrent-preclean-start]
> 2090.679: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2090.679: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2095.690:
> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2095.690: [GC[YG occupancy: 28100 K (118016 K)]2095.690: [Rescan
> (parallel) , 0.0024460 secs]2095.693: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 52153K(158108K), 0.0025460 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 2095.693: [CMS-concurrent-sweep-start]
> 2095.696: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2095.696: [CMS-concurrent-reset-start]
> 2095.705: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2096.616: [GC [1 CMS-initial-mark: 24053K(40092K)] 53289K(158108K),
> 0.0039340 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2096.620: [CMS-concurrent-mark-start]
> 2096.637: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2096.637: [CMS-concurrent-preclean-start]
> 2096.638: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2096.638: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2101.654:
> [CMS-concurrent-abortable-preclean: 0.110/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.01 secs]
> 2101.654: [GC[YG occupancy: 29557 K (118016 K)]2101.654: [Rescan
> (parallel) , 0.0034020 secs]2101.657: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 53610K(158108K), 0.0035000 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2101.657: [CMS-concurrent-sweep-start]
> 2101.661: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2101.661: [CMS-concurrent-reset-start]
> 2101.670: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2103.004: [GC [1 CMS-initial-mark: 24053K(40092K)] 53997K(158108K),
> 0.0042590 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2103.009: [CMS-concurrent-mark-start]
> 2103.027: [CMS-concurrent-mark: 0.017/0.019 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2103.027: [CMS-concurrent-preclean-start]
> 2103.028: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2103.028: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2108.043:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.10 sys=0.01, real=5.02 secs]
> 2108.043: [GC[YG occupancy: 30385 K (118016 K)]2108.044: [Rescan
> (parallel) , 0.0048950 secs]2108.048: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 54438K(158108K), 0.0049930 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2108.049: [CMS-concurrent-sweep-start]
> 2108.052: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2108.052: [CMS-concurrent-reset-start]
> 2108.061: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2110.062: [GC [1 CMS-initial-mark: 24053K(40092K)] 54502K(158108K),
> 0.0042120 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 2110.066: [CMS-concurrent-mark-start]
> 2110.084: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2110.084: [CMS-concurrent-preclean-start]
> 2110.085: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2110.085: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2115.100:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2115.101: [GC[YG occupancy: 30770 K (118016 K)]2115.101: [Rescan
> (parallel) , 0.0049040 secs]2115.106: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 54823K(158108K), 0.0050080 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2115.106: [CMS-concurrent-sweep-start]
> 2115.109: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2115.109: [CMS-concurrent-reset-start]
> 2115.118: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2117.118: [GC [1 CMS-initial-mark: 24053K(40092K)] 54952K(158108K),
> 0.0042490 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2117.123: [CMS-concurrent-mark-start]
> 2117.139: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2117.139: [CMS-concurrent-preclean-start]
> 2117.140: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2117.140: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2122.155:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 2122.155: [GC[YG occupancy: 31219 K (118016 K)]2122.155: [Rescan
> (parallel) , 0.0036460 secs]2122.159: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 55272K(158108K), 0.0037440 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2122.159: [CMS-concurrent-sweep-start]
> 2122.162: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2122.162: [CMS-concurrent-reset-start]
> 2122.172: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2124.172: [GC [1 CMS-initial-mark: 24053K(40092K)] 55401K(158108K),
> 0.0043010 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 2124.176: [CMS-concurrent-mark-start]
> 2124.195: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2124.195: [CMS-concurrent-preclean-start]
> 2124.195: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2124.195: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2129.211:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.01 secs]
> 2129.211: [GC[YG occupancy: 31669 K (118016 K)]2129.211: [Rescan
> (parallel) , 0.0049870 secs]2129.216: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 55722K(158108K), 0.0050860 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2129.216: [CMS-concurrent-sweep-start]
> 2129.219: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 2129.219: [CMS-concurrent-reset-start]
> 2129.228: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2131.229: [GC [1 CMS-initial-mark: 24053K(40092K)] 55850K(158108K),
> 0.0042340 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2131.233: [CMS-concurrent-mark-start]
> 2131.249: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2131.249: [CMS-concurrent-preclean-start]
> 2131.249: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2131.249: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2136.292:
> [CMS-concurrent-abortable-preclean: 0.108/5.042 secs] [Times:
> user=0.11 sys=0.00, real=5.04 secs]
> 2136.292: [GC[YG occupancy: 32174 K (118016 K)]2136.292: [Rescan
> (parallel) , 0.0037250 secs]2136.296: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 56227K(158108K), 0.0038250 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 2136.296: [CMS-concurrent-sweep-start]
> 2136.299: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2136.299: [CMS-concurrent-reset-start]
> 2136.308: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2138.309: [GC [1 CMS-initial-mark: 24053K(40092K)] 56356K(158108K),
> 0.0043040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2138.313: [CMS-concurrent-mark-start]
> 2138.329: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.05
> sys=0.01, real=0.02 secs]
> 2138.329: [CMS-concurrent-preclean-start]
> 2138.329: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2138.329: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2143.341:
> [CMS-concurrent-abortable-preclean: 0.106/5.011 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2143.341: [GC[YG occupancy: 32623 K (118016 K)]2143.341: [Rescan
> (parallel) , 0.0038660 secs]2143.345: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 56676K(158108K), 0.0039760 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 2143.345: [CMS-concurrent-sweep-start]
> 2143.349: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2143.349: [CMS-concurrent-reset-start]
> 2143.358: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2145.358: [GC [1 CMS-initial-mark: 24053K(40092K)] 56805K(158108K),
> 0.0043390 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2145.362: [CMS-concurrent-mark-start]
> 2145.379: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2145.379: [CMS-concurrent-preclean-start]
> 2145.379: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2145.379: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2150.393:
> [CMS-concurrent-abortable-preclean: 0.108/5.014 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2150.393: [GC[YG occupancy: 33073 K (118016 K)]2150.393: [Rescan
> (parallel) , 0.0038190 secs]2150.397: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 57126K(158108K), 0.0039210 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 2150.397: [CMS-concurrent-sweep-start]
> 2150.400: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2150.400: [CMS-concurrent-reset-start]
> 2150.410: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2152.410: [GC [1 CMS-initial-mark: 24053K(40092K)] 57254K(158108K),
> 0.0044080 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2152.415: [CMS-concurrent-mark-start]
> 2152.431: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2152.431: [CMS-concurrent-preclean-start]
> 2152.432: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2152.432: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2157.447:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.01, real=5.02 secs]
> 2157.447: [GC[YG occupancy: 33522 K (118016 K)]2157.447: [Rescan
> (parallel) , 0.0038130 secs]2157.451: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 57575K(158108K), 0.0039160 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2157.451: [CMS-concurrent-sweep-start]
> 2157.454: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2157.454: [CMS-concurrent-reset-start]
> 2157.464: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 2159.464: [GC [1 CMS-initial-mark: 24053K(40092K)] 57707K(158108K),
> 0.0045170 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2159.469: [CMS-concurrent-mark-start]
> 2159.483: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 2159.483: [CMS-concurrent-preclean-start]
> 2159.483: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2159.483: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2164.491:
> [CMS-concurrent-abortable-preclean: 0.111/5.007 secs] [Times:
> user=0.12 sys=0.00, real=5.01 secs]
> 2164.491: [GC[YG occupancy: 34293 K (118016 K)]2164.491: [Rescan
> (parallel) , 0.0052070 secs]2164.496: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 58347K(158108K), 0.0053130 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 2164.496: [CMS-concurrent-sweep-start]
> 2164.500: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2164.500: [CMS-concurrent-reset-start]
> 2164.509: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.01, real=0.01 secs]
> 2166.509: [GC [1 CMS-initial-mark: 24053K(40092K)] 58475K(158108K),
> 0.0045900 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2166.514: [CMS-concurrent-mark-start]
> 2166.533: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2166.533: [CMS-concurrent-preclean-start]
> 2166.533: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2166.533: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2171.549:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.02 secs]
> 2171.549: [GC[YG occupancy: 34743 K (118016 K)]2171.549: [Rescan
> (parallel) , 0.0052200 secs]2171.554: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 58796K(158108K), 0.0053210 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 2171.554: [CMS-concurrent-sweep-start]
> 2171.558: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2171.558: [CMS-concurrent-reset-start]
> 2171.567: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2173.567: [GC [1 CMS-initial-mark: 24053K(40092K)] 58924K(158108K),
> 0.0046700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2173.572: [CMS-concurrent-mark-start]
> 2173.588: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2173.588: [CMS-concurrent-preclean-start]
> 2173.589: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2173.589: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2178.604:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.10 sys=0.01, real=5.02 secs]
> 2178.605: [GC[YG occupancy: 35192 K (118016 K)]2178.605: [Rescan
> (parallel) , 0.0041460 secs]2178.609: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 59245K(158108K), 0.0042450 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2178.609: [CMS-concurrent-sweep-start]
> 2178.612: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 2178.612: [CMS-concurrent-reset-start]
> 2178.622: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2180.622: [GC [1 CMS-initial-mark: 24053K(40092K)] 59373K(158108K),
> 0.0047200 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 2180.627: [CMS-concurrent-mark-start]
> 2180.645: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
> sys=0.00, real=0.02 secs]
> 2180.645: [CMS-concurrent-preclean-start]
> 2180.645: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2180.645: [CMS-concurrent-abortable-preclean-start]
>   CMS: abort preclean due to time 2185.661:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2185.661: [GC[YG occupancy: 35645 K (118016 K)]2185.661: [Rescan
> (parallel) , 0.0050730 secs]2185.666: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 59698K(158108K), 0.0051720 secs]
> [Times: user=0.04 sys=0.01, real=0.01 secs]
> 2185.666: [CMS-concurrent-sweep-start]
> 2185.670: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2185.670: [CMS-concurrent-reset-start]
> 2185.679: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2187.679: [GC [1 CMS-initial-mark: 24053K(40092K)] 59826K(158108K),
> 0.0047350 secs]
>
> --
> gregross:)
>
> 10mo. ANIVERSARIO DE LA CREACION DE LA UNIVERSIDAD DE LAS CIENCIAS INFORMATICAS...
> CONECTADOS AL FUTURO, CONECTADOS A LA REVOLUCION
>
> http://www.uci.cu
> http://www.facebook.com/universidad.uci
> http://www.flickr.com/photos/universidad_uci

-- 
Marcos Ortiz Valmaseda,
Data Engineer && Senior System Administrator at UCI
Blog: http://marcosluis2186.posterous.com
Linkedin: http://www.linkedin.com/in/marcosluis2186
Twitter: @marcosluis2186


Re: long garbage collecting pause

Posted by Greg Ross <gr...@ngmoco.com>.
I'll look into the extra options, then.

Thanks for the info.

Greg

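P.S. for anyone finding this thread later: the "Par New stuff" Michael mentions is normally set via HBASE_REGIONSERVER_OPTS in conf/hbase-env.sh. A minimal sketch, assuming a ~4G region server heap as described above — the generation sizes and occupancy fraction here are illustrative placeholders, not recommendations; tune them against your own GC logs:

```shell
# conf/hbase-env.sh -- illustrative CMS/ParNew settings for a ~4G heap.
# All sizes below are examples only; validate against your GC logs.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -Xms4g -Xmx4g \
  -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
  -XX:NewSize=128m -XX:MaxNewSize=128m \
  -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=4 \
  -XX:CMSInitiatingOccupancyFraction=70 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
  -Xloggc:$HBASE_HOME/logs/gc-regionserver.log"
```

Note that without -XX:+UseCMSInitiatingOccupancyOnly the JVM treats CMSInitiatingOccupancyFraction only as a hint for its adaptive heuristics, which may explain why varying N=[40..70] showed no effect.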

On Mon, Oct 1, 2012 at 2:27 PM, Michael Segel <mi...@hotmail.com> wrote:
> There's more to it... like setting up the Par New stuff.
>
> I think it should be detailed in the tuning section.
>
>
> On Oct 1, 2012, at 4:05 PM, Greg Ross <gr...@ngmoco.com> wrote:
>
>> Thanks, Michael.
>>
>> We have hbase.hregion.memstore.mslab.enabled = true but have left the
>> chunksize and max.allocation not set so I assume these are at their
>> default values.
>>
>> Greg
>>
>>
>> On Mon, Oct 1, 2012 at 1:51 PM, Michael Segel <mi...@hotmail.com> wrote:
>>> Have you implemented MSLABS?
>>>
>>> On Oct 1, 2012, at 3:35 PM, Greg Ross <gr...@ngmoco.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm having difficulty with a mapreduce job that has reducers that read
>>>> from and write to HBase, version 0.92.1, r1298924. Row sizes vary
>>>> greatly. As do the number of cells, although the number of cells is
>>>> typically numbered in the tens, at most. The max cell size is 1MB.
>>>>
>>>> I see the following in the logs followed by the region server promptly
>>>> shutting down:
>>>>
>>>> 2012-10-01 19:08:47,858 [regionserver60020] WARN
>>>> org.apache.hadoop.hbase.util.Sleeper: We slept 28970ms instead of
>>>> 3000ms, this is likely due to a long garbage collecting pause and it's
>>>> usually bad, see
>>>> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
>>>>
>>>> The full logs, including GC are below.
>>>>
>>>> Although new to HBase, I've read up on the likely GC issues and their
>>>> remedies. I've implemented the recommended solutions and still to no
>>>> avail.
>>>>
>>>> Here's what I've tried:
>>>>
>>>> (1) increased the RAM to 4G
>>>> (2) set -XX:+UseConcMarkSweepGC
>>>> (3) set -XX:+UseParNewGC
>>>> (4) set -XX:CMSInitiatingOccupancyFraction=N where I've attempted N=[40..70]
>>>> (5) I've called context.progress() in the reducer before and after
>>>> reading and writing
>>>> (6) memstore is enabled
>>>>
>>>> Is there anything else that I might have missed?
>>>>
>>>> Thanks,
>>>>
>>>> Greg
>>>>
>>>>
>>>> hbase logs
>>>> ========
>>>>
>>>> 2012-10-01 19:09:48,293
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/.tmp/d2ee47650b224189b0c27d1c20929c03
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> 2012-10-01 19:09:48,884
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 5 file(s) in U of
>>>> orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.
>>>> into d2ee47650b224189b0c27d1c20929c03, size=723.0m; total size for
>>>> store is 723.0m
>>>> 2012-10-01 19:09:48,884
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.,
>>>> storeName=U, fileCount=5, fileSize=1.4g, priority=2,
>>>> time=10631266687564968; duration=35sec
>>>> 2012-10-01 19:09:48,886
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
>>>> 2012-10-01 19:09:48,887
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 5
>>>> file(s) in U of
>>>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/.tmp,
>>>> seqid=132201184, totalSize=1.4g
>>>> 2012-10-01 19:10:04,191
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/.tmp/2e5534fea8b24eaf9cc1e05dea788c01
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> 2012-10-01 19:10:04,868
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 5 file(s) in U of
>>>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
>>>> into 2e5534fea8b24eaf9cc1e05dea788c01, size=626.5m; total size for
>>>> store is 626.5m
>>>> 2012-10-01 19:10:04,868
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
>>>> storeName=U, fileCount=5, fileSize=1.4g, priority=2,
>>>> time=10631266696614208; duration=15sec
>>>> 2012-10-01 19:14:04,992
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
>>>> 2012-10-01 19:14:04,993
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/.tmp,
>>>> seqid=132198830, totalSize=863.8m
>>>> 2012-10-01 19:14:19,147
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/.tmp/b741f8501ad248418c48262d751f6e86
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/U/b741f8501ad248418c48262d751f6e86
>>>> 2012-10-01 19:14:19,381
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
>>>> into b741f8501ad248418c48262d751f6e86, size=851.4m; total size for
>>>> store is 851.4m
>>>> 2012-10-01 19:14:19,381
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.,
>>>> storeName=U, fileCount=2, fileSize=863.8m, priority=5,
>>>> time=10631557965747111; duration=14sec
>>>> 2012-10-01 19:14:19,381
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
>>>> 2012-10-01 19:14:19,381
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>>> file(s) in U of
>>>> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/.tmp,
>>>> seqid=132198819, totalSize=496.7m
>>>> 2012-10-01 19:14:27,337
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/.tmp/78040c736c4149a884a1bdcda9916416
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/U/78040c736c4149a884a1bdcda9916416
>>>> 2012-10-01 19:14:27,514
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 3 file(s) in U of
>>>> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
>>>> into 78040c736c4149a884a1bdcda9916416, size=487.5m; total size for
>>>> store is 487.5m
>>>> 2012-10-01 19:14:27,514
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.,
>>>> storeName=U, fileCount=3, fileSize=496.7m, priority=4,
>>>> time=10631557966599560; duration=8sec
>>>> 2012-10-01 19:14:27,514
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
>>>> 2012-10-01 19:14:27,514
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>>> file(s) in U of
>>>> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/.tmp,
>>>> seqid=132200816, totalSize=521.7m
>>>> 2012-10-01 19:14:36,962
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/.tmp/0142b8bcdda948c185887358990af6d1
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/U/0142b8bcdda948c185887358990af6d1
>>>> 2012-10-01 19:14:37,171
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 3 file(s) in U of
>>>> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
>>>> into 0142b8bcdda948c185887358990af6d1, size=510.7m; total size for
>>>> store is 510.7m
>>>> 2012-10-01 19:14:37,171
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.,
>>>> storeName=U, fileCount=3, fileSize=521.7m, priority=4,
>>>> time=10631557967125617; duration=9sec
>>>> 2012-10-01 19:14:37,172
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
>>>> 2012-10-01 19:14:37,172
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>>> file(s) in U of
>>>> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/.tmp,
>>>> seqid=132198832, totalSize=565.5m
>>>> 2012-10-01 19:14:57,082
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/.tmp/44a27dce8df04306908579c22be76786
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/U/44a27dce8df04306908579c22be76786
>>>> 2012-10-01 19:14:57,429
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 3 file(s) in U of
>>>> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
>>>> into 44a27dce8df04306908579c22be76786, size=557.7m; total size for
>>>> store is 557.7m
>>>> 2012-10-01 19:14:57,429
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.,
>>>> storeName=U, fileCount=3, fileSize=565.5m, priority=4,
>>>> time=10631557967207683; duration=20sec
>>>> 2012-10-01 19:14:57,429
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
>>>> 2012-10-01 19:14:57,430
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>>> file(s) in U of
>>>> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/.tmp,
>>>> seqid=132199414, totalSize=845.6m
>>>> 2012-10-01 19:16:54,394
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/.tmp/771813ba0c87449ebd99d5e7916244f8
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/U/771813ba0c87449ebd99d5e7916244f8
>>>> 2012-10-01 19:16:54,636
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 3 file(s) in U of
>>>> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
>>>> into 771813ba0c87449ebd99d5e7916244f8, size=827.3m; total size for
>>>> store is 827.3m
>>>> 2012-10-01 19:16:54,636
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.,
>>>> storeName=U, fileCount=3, fileSize=845.6m, priority=4,
>>>> time=10631557967560440; duration=1mins, 57sec
>>>> 2012-10-01 19:16:54,636
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
>>>> 2012-10-01 19:16:54,637
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>>> file(s) in U of
>>>> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/.tmp,
>>>> seqid=132198824, totalSize=1012.4m
>>>> 2012-10-01 19:17:35,610
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/.tmp/771a4124c763468c8dea927cb53887ee
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/U/771a4124c763468c8dea927cb53887ee
>>>> 2012-10-01 19:17:35,874
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 3 file(s) in U of
>>>> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
>>>> into 771a4124c763468c8dea927cb53887ee, size=974.0m; total size for
>>>> store is 974.0m
>>>> 2012-10-01 19:17:35,875
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.,
>>>> storeName=U, fileCount=3, fileSize=1012.4m, priority=4,
>>>> time=10631557967678796; duration=41sec
>>>> 2012-10-01 19:17:35,875
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
>>>> 2012-10-01 19:17:35,875
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>>> file(s) in U of
>>>> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/.tmp,
>>>> seqid=132198815, totalSize=530.5m
>>>> 2012-10-01 19:17:47,481
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/.tmp/24328f8244f747bf8ba81b74ef2893fa
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/U/24328f8244f747bf8ba81b74ef2893fa
>>>> 2012-10-01 19:17:47,741
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 3 file(s) in U of
>>>> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
>>>> into 24328f8244f747bf8ba81b74ef2893fa, size=524.0m; total size for
>>>> store is 524.0m
>>>> 2012-10-01 19:17:47,741
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.,
>>>> storeName=U, fileCount=3, fileSize=530.5m, priority=4,
>>>> time=10631557967807915; duration=11sec
>>>> 2012-10-01 19:17:47,741
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
>>>> 2012-10-01 19:17:47,741
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>>> file(s) in U of
>>>> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/.tmp,
>>>> seqid=132201190, totalSize=529.3m
>>>> 2012-10-01 19:17:58,031
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/.tmp/cae48d1b96eb4440a7bcd5fa3b4c070b
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/U/cae48d1b96eb4440a7bcd5fa3b4c070b
>>>> 2012-10-01 19:17:58,232
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 3 file(s) in U of
>>>> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
>>>> into cae48d1b96eb4440a7bcd5fa3b4c070b, size=521.3m; total size for
>>>> store is 521.3m
>>>> 2012-10-01 19:17:58,232
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.,
>>>> storeName=U, fileCount=3, fileSize=529.3m, priority=4,
>>>> time=10631557967959079; duration=10sec
>>>> 2012-10-01 19:17:58,232
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
>>>> 2012-10-01 19:17:58,232
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>>> file(s) in U of
>>>> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/.tmp,
>>>> seqid=132199205, totalSize=475.2m
>>>> 2012-10-01 19:18:06,764
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/.tmp/ba51afdc860048b6b2e1047b06fb3b29
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/U/ba51afdc860048b6b2e1047b06fb3b29
>>>> 2012-10-01 19:18:07,065
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 3 file(s) in U of
>>>> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
>>>> into ba51afdc860048b6b2e1047b06fb3b29, size=474.5m; total size for
>>>> store is 474.5m
>>>> 2012-10-01 19:18:07,065
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.,
>>>> storeName=U, fileCount=3, fileSize=475.2m, priority=4,
>>>> time=10631557968104570; duration=8sec
>>>> 2012-10-01 19:18:07,065
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
>>>> 2012-10-01 19:18:07,065
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/.tmp,
>>>> seqid=132198822, totalSize=522.5m
>>>> 2012-10-01 19:18:18,306
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/.tmp/7a0bd16b11f34887b2690e9510071bf0
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/U/7a0bd16b11f34887b2690e9510071bf0
>>>> 2012-10-01 19:18:18,439
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
>>>> into 7a0bd16b11f34887b2690e9510071bf0, size=520.0m; total size for
>>>> store is 520.0m
>>>> 2012-10-01 19:18:18,440
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.,
>>>> storeName=U, fileCount=2, fileSize=522.5m, priority=5,
>>>> time=10631557965863914; duration=11sec
>>>> 2012-10-01 19:18:18,440
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
>>>> 2012-10-01 19:18:18,440
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/.tmp,
>>>> seqid=132198823, totalSize=548.0m
>>>> 2012-10-01 19:18:32,288
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/.tmp/dcd050acc2e747738a90aebaae8920e4
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/U/dcd050acc2e747738a90aebaae8920e4
>>>> 2012-10-01 19:18:32,431
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
>>>> into dcd050acc2e747738a90aebaae8920e4, size=528.2m; total size for
>>>> store is 528.2m
>>>> 2012-10-01 19:18:32,431
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.,
>>>> storeName=U, fileCount=2, fileSize=548.0m, priority=5,
>>>> time=10631557966071838; duration=13sec
>>>> 2012-10-01 19:18:32,431
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
>>>> 2012-10-01 19:18:32,431
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/.tmp,
>>>> seqid=132199001, totalSize=475.9m
>>>> 2012-10-01 19:18:43,154
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/.tmp/15a9167cd9754fd4b3674fe732648a03
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/U/15a9167cd9754fd4b3674fe732648a03
>>>> 2012-10-01 19:18:43,322
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
>>>> into 15a9167cd9754fd4b3674fe732648a03, size=475.9m; total size for
>>>> store is 475.9m
>>>> 2012-10-01 19:18:43,322
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.,
>>>> storeName=U, fileCount=2, fileSize=475.9m, priority=5,
>>>> time=10631557966273447; duration=10sec
>>>> 2012-10-01 19:18:43,322
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
>>>> 2012-10-01 19:18:43,322
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/.tmp,
>>>> seqid=132198833, totalSize=824.8m
>>>> 2012-10-01 19:19:00,252
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/.tmp/bf8da91da0824a909f684c3ecd0ee8da
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/U/bf8da91da0824a909f684c3ecd0ee8da
>>>> 2012-10-01 19:19:00,788
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
>>>> into bf8da91da0824a909f684c3ecd0ee8da, size=803.0m; total size for
>>>> store is 803.0m
>>>> 2012-10-01 19:19:00,788
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.,
>>>> storeName=U, fileCount=2, fileSize=824.8m, priority=5,
>>>> time=10631557966382580; duration=17sec
>>>> 2012-10-01 19:19:00,788
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
>>>> 2012-10-01 19:19:00,788
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/.tmp,
>>>> seqid=132198810, totalSize=565.3m
>>>> 2012-10-01 19:19:11,311
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/.tmp/5cd2032f48bc4287b8866165dcb6d3e6
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/U/5cd2032f48bc4287b8866165dcb6d3e6
>>>> 2012-10-01 19:19:11,504
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
>>>> into 5cd2032f48bc4287b8866165dcb6d3e6, size=553.5m; total size for
>>>> store is 553.5m
>>>> 2012-10-01 19:19:11,504
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.,
>>>> storeName=U, fileCount=2, fileSize=565.3m, priority=5,
>>>> time=10631557966480961; duration=10sec
>>>> 2012-10-01 19:19:11,504
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
>>>> 2012-10-01 19:19:11,504
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/.tmp,
>>>> seqid=132198825, totalSize=519.6m
>>>> 2012-10-01 19:19:22,186
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/.tmp/6f29b3b15f1747c196ac9aa79f4835b1
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/U/6f29b3b15f1747c196ac9aa79f4835b1
>>>> 2012-10-01 19:19:22,437
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
>>>> into 6f29b3b15f1747c196ac9aa79f4835b1, size=512.7m; total size for
>>>> store is 512.7m
>>>> 2012-10-01 19:19:22,437
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.,
>>>> storeName=U, fileCount=2, fileSize=519.6m, priority=5,
>>>> time=10631557966769107; duration=10sec
>>>> 2012-10-01 19:19:22,437
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
>>>> 2012-10-01 19:19:22,437
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/.tmp,
>>>> seqid=132198836, totalSize=528.3m
>>>> 2012-10-01 19:19:34,752
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/.tmp/d836630f7e2b4212848d7e4edc7238f1
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/U/d836630f7e2b4212848d7e4edc7238f1
>>>> 2012-10-01 19:19:34,945
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
>>>> into d836630f7e2b4212848d7e4edc7238f1, size=504.3m; total size for
>>>> store is 504.3m
>>>> 2012-10-01 19:19:34,945
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.,
>>>> storeName=U, fileCount=2, fileSize=528.3m, priority=5,
>>>> time=10631557967026388; duration=12sec
>>>> 2012-10-01 19:19:34,945
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
>>>> 2012-10-01 19:19:34,945
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/.tmp,
>>>> seqid=132198841, totalSize=813.8m
>>>> 2012-10-01 19:19:49,303
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/.tmp/c70692c971cd4e899957f9d5b189372e
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/U/c70692c971cd4e899957f9d5b189372e
>>>> 2012-10-01 19:19:49,428
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
>>>> into c70692c971cd4e899957f9d5b189372e, size=813.7m; total size for
>>>> store is 813.7m
>>>> 2012-10-01 19:19:49,428
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.,
>>>> storeName=U, fileCount=2, fileSize=813.8m, priority=5,
>>>> time=10631557967436197; duration=14sec
>>>> 2012-10-01 19:19:49,428
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
>>>> 2012-10-01 19:19:49,429
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/.tmp,
>>>> seqid=132198642, totalSize=812.0m
>>>> 2012-10-01 19:20:38,718
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/.tmp/bf99f97891ed42f7847a11cfb8f46438
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/U/bf99f97891ed42f7847a11cfb8f46438
>>>> 2012-10-01 19:20:38,825
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
>>>> into bf99f97891ed42f7847a11cfb8f46438, size=811.3m; total size for
>>>> store is 811.3m
>>>> 2012-10-01 19:20:38,825
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.,
>>>> storeName=U, fileCount=2, fileSize=812.0m, priority=5,
>>>> time=10631557968183922; duration=49sec
>>>> 2012-10-01 19:20:38,826
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
>>>> 2012-10-01 19:20:38,826
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/.tmp,
>>>> seqid=132198138, totalSize=588.7m
>>>> 2012-10-01 19:20:48,274
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/.tmp/9f44b7eeab58407ca98bb4ec90126035
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/U/9f44b7eeab58407ca98bb4ec90126035
>>>> 2012-10-01 19:20:48,383
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
>>>> into 9f44b7eeab58407ca98bb4ec90126035, size=573.4m; total size for
>>>> store is 573.4m
>>>> 2012-10-01 19:20:48,383
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.,
>>>> storeName=U, fileCount=2, fileSize=588.7m, priority=5,
>>>> time=10631557968302831; duration=9sec
>>>> 2012-10-01 19:20:48,383
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
>>>> 2012-10-01 19:20:48,383
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/.tmp,
>>>> seqid=132198644, totalSize=870.8m
>>>> 2012-10-01 19:21:04,998
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/.tmp/920844c25b1847c6ac4b880e8cf1d5b0
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/U/920844c25b1847c6ac4b880e8cf1d5b0
>>>> 2012-10-01 19:21:05,107
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
>>>> into 920844c25b1847c6ac4b880e8cf1d5b0, size=869.0m; total size for
>>>> store is 869.0m
>>>> 2012-10-01 19:21:05,107
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.,
>>>> storeName=U, fileCount=2, fileSize=870.8m, priority=5,
>>>> time=10631557968521590; duration=16sec
>>>> 2012-10-01 19:21:05,107
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
>>>> 2012-10-01 19:21:05,107
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/.tmp,
>>>> seqid=132198622, totalSize=885.3m
>>>> 2012-10-01 19:21:27,231
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/.tmp/c85d413975d642fc914253bd08f3484f
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/U/c85d413975d642fc914253bd08f3484f
>>>> 2012-10-01 19:21:27,791
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
>>>> into c85d413975d642fc914253bd08f3484f, size=848.3m; total size for
>>>> store is 848.3m
>>>> 2012-10-01 19:21:27,791
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.,
>>>> storeName=U, fileCount=2, fileSize=885.3m, priority=5,
>>>> time=10631557968628383; duration=22sec
>>>> 2012-10-01 19:21:27,791
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>>> in region orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
>>>> 2012-10-01 19:21:27,791
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>>> file(s) in U of
>>>> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
>>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/.tmp,
>>>> seqid=132198621, totalSize=796.5m
>>>> 2012-10-01 19:21:42,374
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/.tmp/ce543c630dd142309af6dca2a9ab5786
>>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/U/ce543c630dd142309af6dca2a9ab5786
>>>> 2012-10-01 19:21:42,515
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>>> of 2 file(s) in U of
>>>> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
>>>> into ce543c630dd142309af6dca2a9ab5786, size=795.5m; total size for
>>>> store is 795.5m
>>>> 2012-10-01 19:21:42,516
>>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>>> completed compaction:
>>>> regionName=orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.,
>>>> storeName=U, fileCount=2, fileSize=796.5m, priority=5,
>>>> time=10631557968713853; duration=14sec
>>>> 2012-10-01 19:49:58,159 [ResponseProcessor for block
>>>> blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor
>>>> exception  for block
>>>> blk_5535637699691880681_51616301java.io.EOFException
>>>>   at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>   at java.io.DataInputStream.readLong(DataInputStream.java:399)
>>>>   at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:120)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2634)
>>>>
>>>> 2012-10-01 19:49:58,167 [IPC Server handler 87 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
>>>> {"processingtimems":46208,"client":"10.100.102.155:38534","timeRange":[0,9223372036854775807],"starttimems":1349120951956,"responsesize":329939,"class":"HRegionServer","table":"orwell_events","cacheBlocks":true,"families":{"U":["ALL"]},"row":"00322994","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
>>>> 2012-10-01 19:49:58,160
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have
>>>> not heard from server in 56633ms for sessionid 0x137ec64368509f7,
>>>> closing socket connection and attempting reconnect
>>>> 2012-10-01 19:49:58,160 [regionserver60020] WARN
>>>> org.apache.hadoop.hbase.util.Sleeper: We slept 49116ms instead of
>>>> 3000ms, this is likely due to a long garbage collecting pause and it's
>>>> usually bad, see
>>>> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
>>>> 2012-10-01 19:49:58,160
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have
>>>> not heard from server in 53359ms for sessionid 0x137ec64368509f6,
>>>> closing socket connection and attempting reconnect
>>>> 2012-10-01 19:49:58,320 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 waiting for responder to exit.
>>>> 2012-10-01 19:49:58,380 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>>>> 2012-10-01 19:49:58,380 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>>>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>>>> 10.100.101.156:50010
>>>> 2012-10-01 19:49:59,113 [regionserver60020] FATAL
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>>>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: Unhandled
>>>> exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT
>>>> rejected; currently processing
>>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>>>> org.apache.hadoop.hbase.YouAreDeadException:
>>>> org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected;
>>>> currently processing
>>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>>>>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>>   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>>   at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
>>>>   at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:797)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:688)
>>>>   at java.lang.Thread.run(Thread.java:662)
>>>> Caused by: org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected;
>>>> currently processing
>>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>>>>   at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:222)
>>>>   at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:148)
>>>>   at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:844)
>>>>   at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:918)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>>   at $Proxy8.regionServerReport(Unknown Source)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:794)
>>>>   ... 2 more
>>>> 2012-10-01 19:49:59,114 [regionserver60020] FATAL
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>>>> abort: loaded coprocessors are: []
>>>> 2012-10-01 19:49:59,397 [IPC Server handler 36 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
>>>> {"processingtimems":47521,"client":"10.100.102.176:60221","timeRange":[0,9223372036854775807],"starttimems":1349120951875,"responsesize":699312,"class":"HRegionServer","table":"orwell_events","cacheBlocks":true,"families":{"U":["ALL"]},"row":"00318223","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
>>>> 2012-10-01 19:50:00,355 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
>>>> primary datanode 10.100.102.122:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>>>
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>>>
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:00,355
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to
>>>> server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181
>>>> 2012-10-01 19:50:00,356
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>>>> SASL-authenticate because the default JAAS configuration section
>>>> 'Client' could not be found. If you are not using SASL, you may ignore
>>>> this. On the other hand, if you expected SASL to work, please fix your
>>>> JAAS configuration.
>>>> 2012-10-01 19:50:00,356 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.122:50010 failed 1 times.  Pipeline was
>>>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>>>> retry...
>>>> 2012-10-01 19:50:00,357
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>>>> session
>>>> 2012-10-01 19:50:00,358
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>>>> server; r-o mode will be unavailable
>>>> 2012-10-01 19:50:00,359 [regionserver60020-EventThread] FATAL
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>>>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272:
>>>> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6
>>>> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6 received
>>>> expired from ZooKeeper, aborting
>>>> org.apache.zookeeper.KeeperException$SessionExpiredException:
>>>> KeeperErrorCode = Session expired
>>>>   at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:374)
>>>>   at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:271)
>>>>   at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
>>>>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
>>>> 2012-10-01 19:50:00,359
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper
>>>> service, session 0x137ec64368509f6 has expired, closing socket
>>>> connection
>>>> 2012-10-01 19:50:00,359 [regionserver60020-EventThread] FATAL
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>>>> abort: loaded coprocessors are: []
>>>> 2012-10-01 19:50:00,367 [regionserver60020-EventThread] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>>>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>>>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>>>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>>>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>>>> readRequestsCount=6744201, writeRequestsCount=904280,
>>>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1201,
>>>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>>>> blockCacheCount=5435, blockCacheHitCount=321294212,
>>>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>>>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>>>> hdfsBlocksLocalityIndex=97
>>>> 2012-10-01 19:50:00,367 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>>>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>>>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>>>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>>>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>>>> readRequestsCount=6744201, writeRequestsCount=904280,
>>>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1201,
>>>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>>>> blockCacheCount=5435, blockCacheHitCount=321294212,
>>>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>>>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>>>> hdfsBlocksLocalityIndex=97
>>>> 2012-10-01 19:50:00,381
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to
>>>> server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181
>>>> 2012-10-01 19:50:00,401 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unhandled
>>>> exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT
>>>> rejected; currently processing
>>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>>>> 2012-10-01 19:50:00,403
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>>>> SASL-authenticate because the default JAAS configuration section
>>>> 'Client' could not be found. If you are not using SASL, you may ignore
>>>> this. On the other hand, if you expected SASL to work, please fix your
>>>> JAAS configuration.
>>>> 2012-10-01 19:50:00,412 [regionserver60020-EventThread] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED:
>>>> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6
>>>> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6 received
>>>> expired from ZooKeeper, aborting
>>>> 2012-10-01 19:50:00,412 [regionserver60020-EventThread] INFO
>>>> org.apache.zookeeper.ClientCnxn: EventThread shut down
>>>> 2012-10-01 19:50:00,412 [regionserver60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
>>>> 2012-10-01 19:50:00,413
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>>>> session
>>>> 2012-10-01 19:50:00,413 [IPC Server handler 9 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,413 [IPC Server handler 20 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 20 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,413 [IPC Server handler 2 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,413 [IPC Server handler 10 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 10 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,413 [IPC Server listener on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on
>>>> 60020
>>>> 2012-10-01 19:50:00,413 [IPC Server handler 12 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 12 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 21 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 21 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 13 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 13 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 19 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 19 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 22 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 22 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 11 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 11 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.SplitLogWorker: Sending interrupt
>>>> to stop the worker thread
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 6 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Stopping
>>>> infoServer
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 0 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 28 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 28 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 7 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,413 [IPC Server handler 15 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 15 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 5 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 48 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 48 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,413 [IPC Server handler 14 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 14 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,413 [IPC Server handler 18 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 18 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 37 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 37 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 47 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 47 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 50 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 50 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 45 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 45 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 36 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 36 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 43 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 43 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 42 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 42 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 38 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 38 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 8 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 40 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 40 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 34 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 34 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 4 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@5fa9b60a,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320394"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.117:56438: output error
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 61 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 1768076108943205533:java.io.InterruptedIOException:
>>>> Interruped while waiting for IO on channel
>>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59104
>>>> remote=/10.100.101.156:50010]. 59988 millis timeout left.
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1243)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020
>>>> caught: java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 31 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 31 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414
>>>> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272]
>>>> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
>>>> SplitLogWorker interrupted while waiting for task, exiting:
>>>> java.lang.InterruptedException
>>>> 2012-10-01 19:50:00,563
>>>> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272]
>>>> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
>>>> SplitLogWorker data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
>>>> exiting
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 1 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 3201413024070455305:java.io.InterruptedIOException:
>>>> Interruped while waiting for IO on channel
>>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59115
>>>> remote=/10.100.101.156:50010]. 59999 millis timeout left.
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>   at java.io.DataInputStream.readShort(DataInputStream.java:295)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,563 [IPC Server Responder] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 27 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 27 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,414
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>>>> server; r-o mode will be unavailable
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 55 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block -2144655386884254555:java.io.InterruptedIOException:
>>>> Interruped while waiting for IO on channel
>>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59108
>>>> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1350)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,649
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper
>>>> service, session 0x137ec64368509f7 has expired, closing socket
>>>> connection
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 39 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.173:50010 for file
>>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
>>>> for block -2100467641393578191:java.io.InterruptedIOException:
>>>> Interruped while waiting for IO on channel
>>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:48825
>>>> remote=/10.100.102.173:50010]. 60000 millis timeout left.
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 26 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block -5183799322211896791:java.io.InterruptedIOException:
>>>> Interruped while waiting for IO on channel
>>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59078
>>>> remote=/10.100.101.156:50010]. 59949 millis timeout left.
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 85 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block -5183799322211896791:java.io.InterruptedIOException:
>>>> Interruped while waiting for IO on channel
>>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59082
>>>> remote=/10.100.101.156:50010]. 59950 millis timeout left.
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,414 [IPC Server handler 57 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block -1763662403960466408:java.io.InterruptedIOException:
>>>> Interruped while waiting for IO on channel
>>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59116
>>>> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>   at java.io.DataInputStream.readShort(DataInputStream.java:295)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,649 [IPC Server handler 79 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,649 [regionserver60020-EventThread] INFO
>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>>>> This client just lost it's session with ZooKeeper, trying to
>>>> reconnect.
>>>> 2012-10-01 19:50:00,649 [IPC Server handler 89 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,649 [PRI IPC Server handler 3 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 3 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,649 [PRI IPC Server handler 0 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,700 [IPC Server handler 56 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 56 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,649 [PRI IPC Server handler 2 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 2 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,701 [IPC Server handler 54 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 54 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,563 [IPC Server Responder] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
>>>> 2012-10-01 19:50:00,701 [IPC Server handler 71 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 71 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,701 [IPC Server handler 79 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.193:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,563 [IPC Server handler 16 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 9 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 9 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,563 [IPC Server handler 17 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,415 [IPC Server handler 60 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@7eee7b96,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321525"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.125:49043: output error
>>>> 2012-10-01 19:50:00,704 [IPC Server handler 3 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 6550563574061266649:java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,717 [IPC Server handler 49 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 49 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,717 [IPC Server handler 94 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 94 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,701 [IPC Server handler 83 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 83 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 1 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 1 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 7 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 7 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,701 [IPC Server handler 82 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 82 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 6 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 6 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,719 [IPC Server handler 16 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.107:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,701 [IPC Server handler 74 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 74 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,719 [IPC Server handler 86 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 86 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,721 [IPC Server handler 60 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020
>>>> caught: java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:00,721 [IPC Server handler 60 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 5 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 5 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,701 [regionserver60020] INFO org.mortbay.log:
>>>> Stopped SelectChannelConnector@0.0.0.0:60030
>>>> 2012-10-01 19:50:00,722 [IPC Server handler 35 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 35 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,722 [IPC Server handler 16 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.133:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,722 [IPC Server handler 98 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 98 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,701 [IPC Server handler 68 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 68 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,701 [IPC Server handler 64 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 64 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,673 [IPC Server handler 33 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 33 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 76 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 76 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,673 [regionserver60020-EventThread] INFO
>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>>>> Trying to reconnect to zookeeper
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 84 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 84 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 95 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 95 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 75 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 75 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 92 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 92 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 88 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 88 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 67 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 67 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 30 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 30 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 80 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 80 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 62 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 62 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,722 [IPC Server handler 52 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 52 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,722 [IPC Server handler 32 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 32 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,722 [IPC Server handler 97 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 97 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,722 [IPC Server handler 96 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 96 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,722 [IPC Server handler 93 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 93 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,722 [IPC Server handler 73 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 73 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,722 [IPC Server handler 17 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.47:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,722 [IPC Server handler 87 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 87 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,721 [IPC Server handler 81 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 81 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,721 [IPC Server handler 90 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,721 [IPC Server handler 59 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>>> for block -9081461281107361903:java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,719 [IPC Server handler 65 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 65 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,721 [IPC Server handler 29 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 8387547514055202675:java.nio.channels.ClosedChannelException
>>>>   at java.nio.channels.spi.AbstractSelectableChannel.configureBlocking(AbstractSelectableChannel.java:252)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.<init>(SocketIOWithTimeout.java:66)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.<init>(SocketInputStream.java:50)
>>>>   at org.apache.hadoop.net.SocketInputStream.<init>(SocketInputStream.java:73)
>>>>   at org.apache.hadoop.net.SocketInputStream.<init>(SocketInputStream.java:91)
>>>>   at org.apache.hadoop.net.NetUtils.getInputStream(NetUtils.java:323)
>>>>   at org.apache.hadoop.net.NetUtils.getInputStream(NetUtils.java:299)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1474)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,721 [IPC Server handler 66 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 1768076108943205533:java.io.InterruptedIOException:
>>>> Interruped while waiting for IO on channel
>>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59074
>>>> remote=/10.100.101.156:50010]. 59947 millis timeout left.
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,811 [IPC Server handler 59 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.135:50010 for file
>>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>>> for block -9081461281107361903:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,719 [IPC Server handler 58 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.io.InterruptedIOException:
>>>> Interruped while waiting for IO on channel
>>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59107
>>>> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,831 [IPC Server handler 29 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.153:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,719 [IPC Server handler 39 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.144:50010 for file
>>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
>>>> for block -2100467641393578191:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,719 [IPC Server handler 26 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.138:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,852 [IPC Server handler 66 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.174:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,719 [IPC Server handler 41 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>>> for block 5946486101046455013:java.io.InterruptedIOException:
>>>> Interruped while waiting for IO on channel
>>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59091
>>>> remote=/10.100.101.156:50010]. 59953 millis timeout left.
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>>>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,719 [IPC Server handler 1 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.148:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,719 [IPC Server handler 53 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 53 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,719 [IPC Server handler 79 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.154:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,719 [IPC Server handler 89 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.47:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,719 [IPC Server handler 46 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 4946845190538507957:java.io.InterruptedIOException:
>>>> Interruped while waiting for IO on channel
>>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59113
>>>> remote=/10.100.101.156:50010]. 59999 millis timeout left.
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>   at java.io.DataInputStream.readShort(DataInputStream.java:295)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,895 [IPC Server handler 26 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.139:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,701 [IPC Server handler 91 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 91 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,717 [IPC Server handler 3 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.114:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 6550563574061266649:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,899 [IPC Server handler 89 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.134:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,717 [PRI IPC Server handler 4 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 4 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,717 [IPC Server handler 77 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 77 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,717 [PRI IPC Server handler 8 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 8 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,717 [IPC Server handler 99 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 99 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,717 [IPC Server handler 85 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.138:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,717 [IPC Server handler 51 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 51 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,717 [IPC Server handler 57 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.138:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,717 [IPC Server handler 55 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.180:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block -2144655386884254555:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,717 [IPC Server handler 70 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 70 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,717 [IPC Server handler 61 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.174:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,904 [IPC Server handler 55 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.173:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block -2144655386884254555:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,705 [IPC Server handler 23 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,705 [IPC Server handler 24 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 2851854722247682142:java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,704 [IPC Server handler 25 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,904 [IPC Server handler 61 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.97:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,904 [IPC Server handler 23 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.144:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,904 [regionserver60020-EventThread] INFO
>>>> org.apache.zookeeper.ZooKeeper: Initiating client connection,
>>>> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
>>>> sessionTimeout=180000 watcher=hconnection
>>>> 2012-10-01 19:50:00,905 [IPC Server handler 25 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.72:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,904 [IPC Server handler 55 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_-2144655386884254555_51616216 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,904 [IPC Server handler 57 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.144:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,901 [IPC Server handler 85 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_-5183799322211896791_51616591 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,899 [IPC Server handler 89 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_5937357897784147544_51616546 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,899 [IPC Server handler 3 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_6550563574061266649_51616152 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,896 [IPC Server handler 46 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_4946845190538507957_51616628 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,896 [IPC Server handler 41 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.133:50010 for file
>>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>>> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>>>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,896 [IPC Server handler 26 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_-5183799322211896791_51616591 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,896 [IPC Server handler 1 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.175:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,895 [IPC Server handler 66 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.97:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,894 [IPC Server handler 39 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.151:50010 for file
>>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
>>>> for block -2100467641393578191:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,894 [IPC Server handler 79 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_2209451090614340242_51616188 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,857 [IPC Server handler 29 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.101:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,856 [IPC Server handler 58 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.134:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,839 [IPC Server handler 59 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.194:50010 for file
>>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>>> for block -9081461281107361903:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,811 [IPC Server handler 16 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_4946845190538507957_51616628 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,787 [IPC Server handler 90 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.134:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,780 [IPC Server handler 17 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.134:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 63 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 63 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 72 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 72 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,736 [IPC Server handler 78 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 78 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:00,906 [IPC Server handler 59 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_-9081461281107361903_51616031 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,906 [IPC Server handler 39 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_-2100467641393578191_51531005 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,906 [IPC Server handler 41 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.145:50010 for file
>>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>>> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>>>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,905 [IPC Server handler 57 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_-1763662403960466408_51616605 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,905 [IPC Server handler 25 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.162:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,904 [IPC Server handler 23 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_-1763662403960466408_51616605 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,904 [IPC Server handler 24 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.72:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:00,904 [IPC Server handler 61 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_1768076108943205533_51616106 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:00,941 [regionserver60020-SendThread()] INFO
>>>> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
>>>> /10.100.102.197:2181
>>>> 2012-10-01 19:50:00,941 [regionserver60020-EventThread] INFO
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
>>>> of this process is 20776@data3024.ngpipes.milp.ngmoco.com
>>>> 2012-10-01 19:50:00,942
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>>>> SASL-authenticate because the default JAAS configuration section
>>>> 'Client' could not be found. If you are not using SASL, you may ignore
>>>> this. On the other hand, if you expected SASL to work, please fix your
>>>> JAAS configuration.
>>>> 2012-10-01 19:50:00,943
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>>>> session
>>>> 2012-10-01 19:50:00,962
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>>>> server; r-o mode will be unavailable
>>>> 2012-10-01 19:50:00,962
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
>>>> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
>>>> sessionid = 0x137ec64373dd4b3, negotiated timeout = 40000
>>>> 2012-10-01 19:50:00,971 [regionserver60020-EventThread] INFO
>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>>>> Reconnected successfully. This disconnect could have been caused by a
>>>> network partition or a long-running GC pause, either way it's
>>>> recommended that you verify your environment.
>>>> 2012-10-01 19:50:00,971 [regionserver60020-EventThread] INFO
>>>> org.apache.zookeeper.ClientCnxn: EventThread shut down
>>>> 2012-10-01 19:50:01,018 [IPC Server handler 41 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>>> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>>>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,018 [IPC Server handler 24 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,019 [IPC Server handler 41 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.133:50010 for file
>>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>>> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>>>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,019 [IPC Server handler 41 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_5946486101046455013_51616031 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:01,020 [IPC Server handler 25 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.162:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,021 [IPC Server handler 29 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,023 [IPC Server handler 90 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.47:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,023 [IPC Server handler 17 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.47:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,024 [IPC Server handler 66 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.174:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,024 [IPC Server handler 61 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@20c6e4bc,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321393"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.118:57165: output error
>>>> 2012-10-01 19:50:01,038 [IPC Server handler 61 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:01,038 [IPC Server handler 58 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.134:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,038 [IPC Server handler 61 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:01,038 [IPC Server handler 1 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.148:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,039 [IPC Server handler 66 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.97:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,039 [IPC Server handler 29 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.153:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,039 [IPC Server handler 66 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_1768076108943205533_51616106 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:01,039 [IPC Server handler 29 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.102.101:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,041 [IPC Server handler 29 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.156:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,042 [IPC Server handler 29 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.153:50010 for file
>>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,044 [IPC Server handler 1 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>>> /10.100.101.175:50010 for file
>>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>>
>>>> 2012-10-01 19:50:01,090 [IPC Server handler 29 on 60020] ERROR
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>>> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
>>>> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03,
>>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>>> [cacheCompressed=false],
>>>> firstKey=00321084/U:BAHAMUTIOS_1/1348883706322/Put,
>>>> lastKey=00324324/U:user/1348900694793/Put, avgKeyLen=31,
>>>> avgValueLen=125185, entries=6053, length=758129544,
>>>> cur=00321312/U:KINGDOMSQUESTSIPAD_2/1349024761759/Put/vlen=460950]
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Could not obtain block:
>>>> blk_8387547514055202675_51616042
>>>> file=/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   ... 17 more
>>>> 2012-10-01 19:50:01,091 [IPC Server handler 24 on 60020] ERROR
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>>>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>>> [cacheCompressed=false],
>>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>>> avgValueLen=89140, entries=7365, length=656954017,
>>>> cur=00318964/U:user/1349118541276/Put/vlen=311046]
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Could not obtain block:
>>>> blk_2851854722247682142_51616579
>>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   ... 14 more
>>>> 2012-10-01 19:50:01,091 [IPC Server handler 1 on 60020] ERROR
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>>>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>>> [cacheCompressed=false],
>>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>>> avgValueLen=89140, entries=7365, length=656954017,
>>>> cur=0032027/U:KINGDOMSQUESTS_10/1349118531396/Put/vlen=401149]
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Could not obtain block:
>>>> blk_3201413024070455305_51616611
>>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   ... 14 more
>>>> 2012-10-01 19:50:01,091 [IPC Server handler 25 on 60020] ERROR
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>>>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>>> [cacheCompressed=false],
>>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>>> avgValueLen=89140, entries=7365, length=656954017,
>>>> cur=00319173/U:TINYTOWERANDROID_3/1349024232716/Put/vlen=129419]
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Could not obtain block:
>>>> blk_2851854722247682142_51616579
>>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   ... 14 more
>>>> 2012-10-01 19:50:01,091 [IPC Server handler 90 on 60020] ERROR
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>>>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>>> [cacheCompressed=false],
>>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>>> avgValueLen=89140, entries=7365, length=656954017,
>>>> cur=00316914/U:PETCAT_2/1349118542022/Put/vlen=499140]
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Could not obtain block:
>>>> blk_5937357897784147544_51616546
>>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   ... 14 more
>>>> 2012-10-01 19:50:01,091 [IPC Server handler 17 on 60020] ERROR
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>>> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
>>>> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>>> [cacheCompressed=false],
>>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>>> avgValueLen=89140, entries=7365, length=656954017,
>>>> cur=00317054/U:BAHAMUTIOS_4/1348869430278/Put/vlen=104012]
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Could not obtain block:
>>>> blk_5937357897784147544_51616546
>>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   ... 17 more
>>>> 2012-10-01 19:50:01,091 [IPC Server handler 58 on 60020] ERROR
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>>>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>>> [cacheCompressed=false],
>>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>>> avgValueLen=89140, entries=7365, length=656954017,
>>>> cur=00316983/U:TINYTOWERANDROID_1/1349118439250/Put/vlen=417924]
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Could not obtain block:
>>>> blk_5937357897784147544_51616546
>>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>>   ... 14 more
>>>> 2012-10-01 19:50:01,091 [IPC Server handler 89 on 60020] ERROR
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>>> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
>>>> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>>> [cacheCompressed=false],
>>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>>> avgValueLen=89140, entries=7365, length=656954017,
>>>> cur=00317043/U:BAHAMUTANDROID_7/1348968079952/Put/vlen=419212]
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Could not obtain block:
>>>> blk_5937357897784147544_51616546
>>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>>   ... 17 more
>>>> 2012-10-01 19:50:01,094 [IPC Server handler 58 on 60020] WARN
>>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>>> server
>>>> java.lang.InterruptedException
>>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> 2012-10-01 19:50:01,094 [IPC Server handler 17 on 60020] WARN
>>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>>> server
>>>> java.lang.InterruptedException
>>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> 2012-10-01 19:50:01,093 [IPC Server handler 90 on 60020] WARN
>>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>>> server
>>>> java.lang.InterruptedException
>>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> 2012-10-01 19:50:01,093 [IPC Server handler 25 on 60020] WARN
>>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>>> server
>>>> java.lang.InterruptedException
>>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> 2012-10-01 19:50:01,092 [IPC Server handler 1 on 60020] WARN
>>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>>> server
>>>> java.lang.InterruptedException
>>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> 2012-10-01 19:50:01,092 [IPC Server handler 24 on 60020] WARN
>>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>>> server
>>>> java.lang.InterruptedException
>>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> 2012-10-01 19:50:01,091 [IPC Server handler 29 on 60020] WARN
>>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>>> server
>>>> java.lang.InterruptedException
>>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> 2012-10-01 19:50:01,095 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>>>> 2012-10-01 19:50:01,097 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>>>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>>>> 10.100.101.156:50010
>>>> 2012-10-01 19:50:01,115 [IPC Server handler 39 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@2743ecf8,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00390925"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.122:51758: output error
>>>> 2012-10-01 19:50:01,139 [IPC Server handler 39 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:01,139 [IPC Server handler 39 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:01,151 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
>>>> primary datanode 10.100.102.122:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>>>
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>>>
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:01,151 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.122:50010 failed 2 times.  Pipeline was
>>>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>>>> retry...
>>>> 2012-10-01 19:50:01,153 [IPC Server handler 89 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@7137feec,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00317043"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.68:55302: output error
>>>> 2012-10-01 19:50:01,154 [IPC Server handler 89 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:01,154 [IPC Server handler 89 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:01,156 [IPC Server handler 66 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@6b9a9eba,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321504"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.176:32793: output error
>>>> 2012-10-01 19:50:01,157 [IPC Server handler 66 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:01,158 [IPC Server handler 66 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:01,159 [IPC Server handler 41 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@586761c,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00391525"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.155:39850: output error
>>>> 2012-10-01 19:50:01,160 [IPC Server handler 41 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:01,160 [IPC Server handler 41 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:01,216 [regionserver60020.compactionChecker] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer$CompactionChecker:
>>>> regionserver60020.compactionChecker exiting
>>>> 2012-10-01 19:50:01,216 [regionserver60020.logRoller] INFO
>>>> org.apache.hadoop.hbase.regionserver.LogRoller: LogRoller exiting.
>>>> 2012-10-01 19:50:01,216 [regionserver60020.cacheFlusher] INFO
>>>> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
>>>> regionserver60020.cacheFlusher exiting
>>>> 2012-10-01 19:50:01,217 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: aborting server
>>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
>>>> 2012-10-01 19:50:01,218 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>>>> Closed zookeeper sessionid=0x137ec64373dd4b3
>>>> 2012-10-01 19:50:01,270
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,24294294,1349027918385.068e6f4f7b8a81fb21e49fe3ac47f262.
>>>> 2012-10-01 19:50:01,271
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,96510144,1348960969795.fe2a133a17d09a65a6b0d4fb60e6e051.
>>>> 2012-10-01 19:50:01,272
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56499174,1349027424070.7f767ca333bef3dcdacc9a6c673a8350.
>>>> 2012-10-01 19:50:01,273
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,96515494,1348960969795.8ab4e1d9f4e4c388f3f4f18eec637e8a.
>>>> 2012-10-01 19:50:01,273
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,98395724,1348969940123.08188cc246bf752c17cfe57f99970924.
>>>> 2012-10-01 19:50:01,274
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
>>>> 2012-10-01 19:50:01,275
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56604984,1348940650040.14639a082062e98abfea8ae3fff5d2c7.
>>>> 2012-10-01 19:50:01,275
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56880144,1348969971950.ece85a086a310aacc2da259a3303e67e.
>>>> 2012-10-01 19:50:01,276
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
>>>> 2012-10-01 19:50:01,277
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,31267284,1348961229728.fc429276c44f5c274f00168f12128bad.
>>>> 2012-10-01 19:50:01,278
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56569824,1348940809479.9808dac5b895fc9b8f9892c4b72b3804.
>>>> 2012-10-01 19:50:01,279
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56425354,1349031095620.e4965f2e57729ff9537986da3e19258c.
>>>> 2012-10-01 19:50:01,280
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,96504305,1348964001164.77f75cf8ba76ebc4417d49f019317d0a.
>>>> 2012-10-01 19:50:01,280
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,60743825,1348962513777.f377f704db5f0d000e36003338e017b1.
>>>> 2012-10-01 19:50:01,283
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,09603014,1349026790546.d634bfe659bdf2f45ec89e53d2d38791.
>>>> 2012-10-01 19:50:01,283
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,31274021,1348961229728.e93382b458a84c22f2e5aeb9efa737b5.
>>>> 2012-10-01 19:50:01,285
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56462454,1348982699951.a2dafbd054bf65aa6f558dc9a2d839a1.
>>>> 2012-10-01 19:50:01,286
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> Orwell,48814673,1348270987327.29818ea19d62126d5616a7ba7d7dae21.
>>>> 2012-10-01 19:50:01,288
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56610954,1348940650040.3609c1bfc2be6936577b6be493e7e8d9.
>>>> 2012-10-01 19:50:01,289
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
>>>> 2012-10-01 19:50:01,289
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,05205763,1348941089603.957ea0e428ba6ff21174ecdda96f9fdc.
>>>> 2012-10-01 19:50:01,289
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56349615,1348941138879.dfabbd25c59fd6c34a58d9eacf4c096f.
>>>> 2012-10-01 19:50:01,292
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56503505,1349027424070.129160a78f13c17cc9ea16ff3757cda9.
>>>> 2012-10-01 19:50:01,292
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,91248264,1348942310344.a93982b8f91f260814885bc0afb4fbb9.
>>>> 2012-10-01 19:50:01,293
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,98646724,1348980566403.a4f2a16d1278ad1246068646c4886502.
>>>> 2012-10-01 19:50:01,293
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56454594,1348982903997.7107c6a1b2117fb59f68210ce82f2cc9.
>>>> 2012-10-01 19:50:01,294
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56564144,1348940809479.636092bb3ec2615b115257080427d091.
>>>> 2012-10-01 19:50:01,295
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_user_events,06252594,1348582793143.499f0a0f4704afa873c83f141f5e0324.
>>>> 2012-10-01 19:50:01,296
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56617164,1348941287729.3992a80a6648ab62753b4998331dcfdf.
>>>> 2012-10-01 19:50:01,296
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,98390944,1348969940123.af160e450632411818fa8d01b2c2ed0b.
>>>> 2012-10-01 19:50:01,297
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56703743,1348941223663.5cc2fcb82080dbf14956466c31f1d27c.
>>>> 2012-10-01 19:50:01,297
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
>>>> 2012-10-01 19:50:01,298
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56693584,1348942631318.f01b179c1fad1f18b97b37fc8f730898.
>>>> 2012-10-01 19:50:01,299
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_user_events,12140615,1348582250428.7822f7f5ceea852b04b586fdf34debff.
>>>> 2012-10-01 19:50:01,300
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
>>>> 2012-10-01 19:50:01,300
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,96420705,1348942597601.a063e06eb840ee49bb88474ee8e22160.
>>>> 2012-10-01 19:50:01,300
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
>>>> 2012-10-01 19:50:01,300
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,96432674,1348961425148.1a793cf2137b9599193a1e2d5d9749c5.
>>>> 2012-10-01 19:50:01,302
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
>>>> 2012-10-01 19:50:01,303
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,44371574,1348961840615.00f5b4710a43f2ee75d324bebb054323.
>>>> 2012-10-01 19:50:01,304
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,562fc921,1348941189517.cff261c585416844113f232960c8d6b4.
>>>> 2012-10-01 19:50:01,304
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56323831,1348941216581.0b0f3bdb03ce9e4f58156a4143018e0e.
>>>> 2012-10-01 19:50:01,305
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56480194,1349028080664.03a7046ffcec7e1f19cdb2f9890a353e.
>>>> 2012-10-01 19:50:01,306
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56418294,1348940288044.c872be05981c047e8c1ee4765b92a74d.
>>>> 2012-10-01 19:50:01,306
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,53590305,1348940776419.4c98d7846622f2d8dad4e998dae81d2b.
>>>> 2012-10-01 19:50:01,307
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,96445963,1348942353563.66a0f602720191bf21a1dfd12eec4a35.
>>>> 2012-10-01 19:50:01,307
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
>>>> 2012-10-01 19:50:01,307
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56305294,1348941189517.20f67941294c259e2273d3e0b7ae5198.
>>>> 2012-10-01 19:50:01,308
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56516115,1348981132325.0f753cb87c1163d95d9d10077d6308db.
>>>> 2012-10-01 19:50:01,309
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56796924,1348941269761.843e0aee0b15d67b810c7b3fe5a2dda7.
>>>> 2012-10-01 19:50:01,309
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56440004,1348941150045.7033cb81a66e405d7bf45cd55ab010e3.
>>>> 2012-10-01 19:50:01,309
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56317864,1348941124299.0de45283aa626fc83b2c026e1dd8bfec.
>>>> 2012-10-01 19:50:01,310
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56809673,1348941834500.08244d4ed5f7fdf6d9ac9c73fbfd3947.
>>>> 2012-10-01 19:50:01,310
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56894864,1348970959541.fc19a6ffe18f29203369d32ad1b102ce.
>>>> 2012-10-01 19:50:01,311
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56382491,1348940876960.2392137bf0f4cb695c08c0fb22ce5294.
>>>> 2012-10-01 19:50:01,312
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,95128264,1349026585563.5dc569af8afe0a84006b80612c15007f.
>>>> 2012-10-01 19:50:01,312
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,5631146,1348941124299.b7c10be9855b5e8ba3a76852920627f9.
>>>> 2012-10-01 19:50:01,312
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56710424,1348940462668.a370c149c232ebf4427e070eb28079bc.
>>>> 2012-10-01 19:50:01,314 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Session: 0x137ec64373dd4b3 closed
>>>> 2012-10-01 19:50:01,314 [regionserver60020-EventThread] INFO
>>>> org.apache.zookeeper.ClientCnxn: EventThread shut down
>>>> 2012-10-01 19:50:01,314 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 78
>>>> regions to close
>>>> 2012-10-01 19:50:01,317
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,96497834,1348964001164.0b12f37b74b2124ef9f27d1ef0ebb17a.
>>>> 2012-10-01 19:50:01,318
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56507574,1349027965795.79113c51d318a11286b39397ebbfdf04.
>>>> 2012-10-01 19:50:01,319
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,24297525,1349027918385.047533f3d801709a26c895a01dcc1a73.
>>>> 2012-10-01 19:50:01,320
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,96439694,1348961425148.038e0e43a6e56760e4daae6f34bfc607.
>>>> 2012-10-01 19:50:01,320
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,82811715,1348904784424.88fae4279f9806bef745d90f7ad37241.
>>>> 2012-10-01 19:50:01,321
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56699434,1348941223663.ef3ccf0af60ee87450806b393f89cb6e.
>>>> 2012-10-01 19:50:01,321
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
>>>> 2012-10-01 19:50:01,322
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
>>>> 2012-10-01 19:50:01,322
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
>>>> 2012-10-01 19:50:01,323
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56465563,1348982699951.f34a29c0c4fc32e753d12db996ccc995.
>>>> 2012-10-01 19:50:01,324
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56450734,1349027937173.c70110b3573a48299853117c4287c7be.
>>>> 2012-10-01 19:50:01,325
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56361984,1349029457686.6c8d6974741e59df971da91c7355de1c.
>>>> 2012-10-01 19:50:01,327
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56814705,1348962077056.69fd74167a3c5c2961e45d339b962ca9.
>>>> 2012-10-01 19:50:01,327
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,00389105,1348978080963.6463149a16179d4e44c19bb49e4b4a81.
>>>> 2012-10-01 19:50:01,329
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56558944,1348940893836.03bd1c0532949ec115ca8d5215dbb22f.
>>>> 2012-10-01 19:50:01,330 [IPC Server handler 59 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@112ba2bf,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00392783"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.101.135:34935: output error
>>>> 2012-10-01 19:50:01,330
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,5658955,1349027142822.e65d0c1f452cb41d47ad08560c653607.
>>>> 2012-10-01 19:50:01,331 [IPC Server handler 59 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:01,331
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56402364,1349049689267.27b452f3bcce0815b7bf92370cbb51de.
>>>> 2012-10-01 19:50:01,331 [IPC Server handler 59 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:01,332
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,96426544,1348942597601.addf704f99dd1b2e07b3eff505e2c811.
>>>> 2012-10-01 19:50:01,333
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,60414161,1348962852909.c6b1b21f00bbeef8648c4b9b3d28b49a.
>>>> 2012-10-01 19:50:01,333
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56552794,1348940893836.5314886f88f6576e127757faa25cef7c.
>>>> 2012-10-01 19:50:01,335
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56910924,1348962040261.fdedae86206fc091a72dde52a3d0d0b4.
>>>> 2012-10-01 19:50:01,335
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56720084,1349029064698.ee5cb00ab358be0d2d36c59189da32f8.
>>>> 2012-10-01 19:50:01,336
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56624533,1348941287729.6121fce2c31d4754b4ad4e855d85b501.
>>>> 2012-10-01 19:50:01,336
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56899934,1348970959541.f34f01dd65e293cb6ab13de17ac91eec.
>>>> 2012-10-01 19:50:01,337
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
>>>> 2012-10-01 19:50:01,337
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56405923,1349049689267.bb4be5396608abeff803400cdd2408f4.
>>>> 2012-10-01 19:50:01,338
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56364924,1349029457686.1e1c09b6eb734d8ad48ea0b4fa103381.
>>>> 2012-10-01 19:50:01,339
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56784073,1348961864297.f01eaf712e59a0bca989ced951caf4f1.
>>>> 2012-10-01 19:50:01,340
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56594534,1349027142822.8e67bb85f4906d579d4d278d55efce0b.
>>>> 2012-10-01 19:50:01,340
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
>>>> 2012-10-01 19:50:01,340
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56491525,1349027928183.7bbfb4d39ef4332e17845001191a6ad4.
>>>> 2012-10-01 19:50:01,341
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,07123624,1348959804638.c114ec80c6693a284741e220da028736.
>>>> 2012-10-01 19:50:01,342
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
>>>> 2012-10-01 19:50:01,342
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56546534,1348941049708.bde2614732f938db04fdd81ed6dbfcf2.
>>>> 2012-10-01 19:50:01,343
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,569054,1348962040261.a7942d7837cd57b68d156d2ce7e3bd5f.
>>>> 2012-10-01 19:50:01,343
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56576714,1348982931576.3dd5bf244fb116cf2b6f812fcc39ad2d.
>>>> 2012-10-01 19:50:01,344
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,5689007,1348963034009.c4b16ea4d8dbc66c301e67d8e58a7e48.
>>>> 2012-10-01 19:50:01,344
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56410784,1349027912141.6de7be1745c329cf9680ad15e9bde594.
>>>> 2012-10-01 19:50:01,345
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
>>>> 2012-10-01 19:50:01,345
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,96457954,1348964300132.674a03f0c9866968aabd70ab38a482c0.
>>>> 2012-10-01 19:50:01,346
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56483084,1349027988535.de732d7e63ea53331b80255f51fc1a86.
>>>> 2012-10-01 19:50:01,347
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56790484,1348941269761.5bcc58c48351de449cc17307ab4bf777.
>>>> 2012-10-01 19:50:01,348
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56458293,1348982903997.4f67e6f4949a2ef7f4903f78f54c474e.
>>>> 2012-10-01 19:50:01,348
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,95123235,1349026585563.a359eb4cb88d34a529804e50a5affa24.
>>>> 2012-10-01 19:50:01,349
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
>>>> 2012-10-01 19:50:01,350
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56368484,1348941099873.cef2729093a0d7d72b71fac1b25c0a40.
>>>> 2012-10-01 19:50:01,350
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,17499894,1349026916228.630196a553f73069b9e568e6912ef0c5.
>>>> 2012-10-01 19:50:01,351
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56375315,1348940876960.40cf6dfa370ce7f1fc6c1a59ba2f2191.
>>>> 2012-10-01 19:50:01,351
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,95512574,1349009451986.e4d292eb66d16c21ef8ae32254334850.
>>>> 2012-10-01 19:50:01,352
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
>>>> 2012-10-01 19:50:01,352
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
>>>> 2012-10-01 19:50:01,353
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56432705,1348941150045.07aa626f3703c7b4deaba1263c71894d.
>>>> 2012-10-01 19:50:01,353
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,13118725,1349026772953.c0be859d4a4dc2246d764a8aad58fe88.
>>>> 2012-10-01 19:50:01,354
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56520814,1348981132325.c2f16fd16f83aa51769abedfe8968bb6.
>>>> 2012-10-01 19:50:01,354
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
>>>> 2012-10-01 19:50:01,355
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56884434,1348963034009.616835869c81659a27eab896f48ae4e1.
>>>> 2012-10-01 19:50:01,355
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56476541,1349028080664.341392a325646f24a3d8b8cd27ebda19.
>>>> 2012-10-01 19:50:01,357
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56803462,1348941834500.6313b36f1949381d01df977a182e6140.
>>>> 2012-10-01 19:50:01,357
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,96464524,1348964300132.7a15f1e8e28f713212c516777267c2bf.
>>>> 2012-10-01 19:50:01,358
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56875074,1348969971950.3e408e7cb32c9213d184e10bf42837ad.
>>>> 2012-10-01 19:50:01,359
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,42862354,1348981565262.7ad46818060be413140cdcc11312119d.
>>>> 2012-10-01 19:50:01,359
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56582264,1349028973106.b481b61be387a041a3f259069d5013a6.
>>>> 2012-10-01 19:50:01,360
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56579105,1348982931576.1561a22c16263dccb8be07c654b43f2f.
>>>> 2012-10-01 19:50:01,360
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56723415,1348946404223.38d992d687ad8925810be4220a732b13.
>>>> 2012-10-01 19:50:01,361
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,4285921,1348981565262.7a2cbd8452b9e406eaf1a5ebff64855a.
>>>> 2012-10-01 19:50:01,362
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56336394,1348941231573.ca52393a2eabae00a64f65c0b657b95a.
>>>> 2012-10-01 19:50:01,363
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,96452715,1348942353563.876edfc6e978879aac42bfc905a09c26.
>>>> 2012-10-01 19:50:01,363
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
>>>> 2012-10-01 19:50:01,364
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56525625,1348941298909.ccf16ed8e761765d2989343c7670e94f.
>>>> 2012-10-01 19:50:01,365
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,97578484,1348938848996.98ecacc61ae4c5b3f7a3de64bec0e026.
>>>> 2012-10-01 19:50:01,365
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56779025,1348961864297.cc13f0a6f5e632508f2e28a174ef1488.
>>>> 2012-10-01 19:50:01,366
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
>>>> 2012-10-01 19:50:01,366
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_user_events,43323443,1348591057882.8b0ab02c33f275114d89088345f58885.
>>>> 2012-10-01 19:50:01,367
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
>>>> 2012-10-01 19:50:01,367
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,56686234,1348942631318.69270cd5013f8ca984424e508878e428.
>>>> 2012-10-01 19:50:01,368
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,98642625,1348980566403.2277d2ef1d53d40d41cd23846619a3f8.
>>>> 2012-10-01 19:50:01,524 [IPC Server handler 57 on 60020] INFO
>>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>>> blk_3201413024070455305_51616611 from any node: java.io.IOException:
>>>> No live nodes contain current block. Will get new block locations from
>>>> namenode and retry...
>>>> 2012-10-01 19:50:02,462 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 2
>>>> regions to close
>>>> 2012-10-01 19:50:02,462 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>>>> 2012-10-01 19:50:02,462 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>>>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>>>> 10.100.101.156:50010
>>>> 2012-10-01 19:50:02,495 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
>>>> primary datanode 10.100.102.122:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>>>
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>>>
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:02,496 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.122:50010 failed 3 times.  Pipeline was
>>>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>>>> retry...
>>>> 2012-10-01 19:50:02,686 [IPC Server handler 46 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@504b62c6,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320404"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.101.172:53925: output error
>>>> 2012-10-01 19:50:02,688 [IPC Server handler 46 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:02,688 [IPC Server handler 46 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:02,809 [IPC Server handler 55 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@45f1c31e,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00322424"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.101.178:35016: output error
>>>> 2012-10-01 19:50:02,810 [IPC Server handler 55 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:02,810 [IPC Server handler 55 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:03,496 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>>>> 2012-10-01 19:50:03,496 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>>>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>>>> 10.100.101.156:50010
>>>> 2012-10-01 19:50:03,510 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
>>>> primary datanode 10.100.102.122:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>>>
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>>>
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:03,510 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.122:50010 failed 4 times.  Pipeline was
>>>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>>>> retry...
>>>> 2012-10-01 19:50:05,299 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>>>> 2012-10-01 19:50:05,299 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>>>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>>>> 10.100.101.156:50010
>>>> 2012-10-01 19:50:05,314 [IPC Server handler 3 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@472aa9fe,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321694"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.101.176:42371: output error
>>>> 2012-10-01 19:50:05,315 [IPC Server handler 3 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:05,315 [IPC Server handler 3 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:05,329 [IPC Server handler 16 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@42987a12,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320293"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.101.135:35132: output error
>>>> 2012-10-01 19:50:05,331 [IPC Server handler 16 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:05,331 [IPC Server handler 16 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:05,638 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
>>>> primary datanode 10.100.102.122:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>>>
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>>>
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:05,638 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.122:50010 failed 5 times.  Pipeline was
>>>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>>>> retry...
>>>> 2012-10-01 19:50:05,641 [IPC Server handler 26 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@a9c09e8,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319505"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.101.183:60078: output error
>>>> 2012-10-01 19:50:05,643 [IPC Server handler 26 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:05,643 [IPC Server handler 26 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:05,664 [IPC Server handler 57 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@349d7b4,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319915"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.101.141:58290: output error
>>>> 2012-10-01 19:50:05,666 [IPC Server handler 57 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>
>>>> 2012-10-01 19:50:05,666 [IPC Server handler 57 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:07,063 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>>>> 2012-10-01 19:50:07,063 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>>>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>>>> 10.100.101.156:50010
>>>> 2012-10-01 19:50:07,076 [IPC Server handler 23 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@5ba03734,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319654"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.161:43227: output error
>>>> 2012-10-01 19:50:07,077 [IPC Server handler 23 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>
>>>> 2012-10-01 19:50:07,077 [IPC Server handler 23 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:07,089 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
>>>> primary datanode 10.100.102.122:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>> 2012-10-01 19:50:07,090 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.122:50010 failed 6 times.  Pipeline was
>>>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010.
>>>> Marking primary datanode as bad.
>>>> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@3d19e607,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319564"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.82:42779: output error
>>>> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>
>>>> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:07,181
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.
>>>> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@5920511b,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00322014"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.88:49489: output error
>>>> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>
>>>> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:08,064 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 1
>>>> regions to close
>>>> 2012-10-01 19:50:08,159 [regionserver60020.leaseChecker] INFO
>>>> org.apache.hadoop.hbase.regionserver.Leases:
>>>> regionserver60020.leaseChecker closing leases
>>>> 2012-10-01 19:50:08,159 [regionserver60020.leaseChecker] INFO
>>>> org.apache.hadoop.hbase.regionserver.Leases:
>>>> regionserver60020.leaseChecker closed leases
>>>> 2012-10-01 19:50:08,508 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
>>>> primary datanode 10.100.101.156:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>> 2012-10-01 19:50:08,508 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.101.156:50010 failed 1 times.  Pipeline was
>>>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>>>> 2012-10-01 19:50:09,652 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
>>>> primary datanode 10.100.101.156:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>> 2012-10-01 19:50:09,653 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.101.156:50010 failed 2 times.  Pipeline was
>>>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>>>> 2012-10-01 19:50:10,697 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
>>>> primary datanode 10.100.101.156:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>> 2012-10-01 19:50:10,697 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.101.156:50010 failed 3 times.  Pipeline was
>>>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>>>> 2012-10-01 19:50:12,278 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
>>>> primary datanode 10.100.101.156:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>> 2012-10-01 19:50:12,279 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.101.156:50010 failed 4 times.  Pipeline was
>>>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>>>> 2012-10-01 19:50:13,294 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
>>>> primary datanode 10.100.101.156:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>> 2012-10-01 19:50:13,294 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.101.156:50010 failed 5 times.  Pipeline was
>>>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>>>> 2012-10-01 19:50:14,306 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
>>>> primary datanode 10.100.101.156:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>>>
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>>>
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:14,306 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.101.156:50010 failed 6 times.  Pipeline was
>>>> 10.100.101.156:50010, 10.100.102.88:50010. Marking primary datanode as
>>>> bad.
>>>> 2012-10-01 19:50:15,317 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
>>>> primary datanode 10.100.102.88:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>>   [stack trace identical to the one above; omitted]
>>>> 2012-10-01 19:50:15,318 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 1 times.  Pipeline was
>>>> 10.100.102.88:50010. Will retry...
>>>> 2012-10-01 19:50:16,375 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
>>>> primary datanode 10.100.102.88:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>>   [stack trace identical to the one above; omitted]
>>>> 2012-10-01 19:50:16,376 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 2 times.  Pipeline was
>>>> 10.100.102.88:50010. Will retry...
>>>> 2012-10-01 19:50:17,385 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
>>>> primary datanode 10.100.102.88:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>>   [stack trace identical to the one above; omitted]
>>>> 2012-10-01 19:50:17,385 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 3 times.  Pipeline was
>>>> 10.100.102.88:50010. Will retry...
>>>> 2012-10-01 19:50:18,395 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
>>>> primary datanode 10.100.102.88:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>>   [stack trace identical to the one above; omitted]
>>>> 2012-10-01 19:50:18,395 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 4 times.  Pipeline was
>>>> 10.100.102.88:50010. Will retry...
>>>> 2012-10-01 19:50:19,404 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
>>>> primary datanode 10.100.102.88:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>>   [stack trace identical to the one above; omitted]
>>>> 2012-10-01 19:50:19,405 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 5 times.  Pipeline was
>>>> 10.100.102.88:50010. Will retry...
>>>> 2012-10-01 19:50:20,414 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
>>>> primary datanode 10.100.102.88:50010
>>>> org.apache.hadoop.ipc.RemoteException:
>>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>>> null.
>>>>   [stack trace identical to the one above; omitted]
>>>> 2012-10-01 19:50:20,415 [DataStreamer for file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> block blk_5535637699691880681_51616301] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>> 2012-10-01 19:50:20,415 [IPC Server handler 58 on 60020] ERROR
>>>> org.apache.hadoop.hdfs.DFSClient: Exception closing file
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>>> : java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>> java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:20,415 [IPC Server handler 69 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>>>> java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:20,415 [regionserver60020.logSyncer] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>>>> java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:20,417 [IPC Server handler 69 on 60020] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>>>> java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:20,417 [IPC Server handler 58 on 60020] INFO
>>>> org.apache.hadoop.fs.FileSystem: Could not cancel cleanup thread,
>>>> though no FileSystems are open
>>>> 2012-10-01 19:50:20,417 [regionserver60020.logSyncer] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>>>> java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:20,417 [regionserver60020.logSyncer] FATAL
>>>> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
>>>> Requesting close of hlog
>>>> java.io.IOException: Reflection
>>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>>>>   at java.lang.Thread.run(Thread.java:662)
>>>> Caused by: java.lang.reflect.InvocationTargetException
>>>>   at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>>>   ... 4 more
>>>> Caused by: java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:20,418 [regionserver60020.logSyncer] ERROR
>>>> org.apache.hadoop.hbase.regionserver.wal.HLog: Error while syncing,
>>>> requesting close of hlog
>>>> java.io.IOException: Reflection
>>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>>>>   at java.lang.Thread.run(Thread.java:662)
>>>> Caused by: java.lang.reflect.InvocationTargetException
>>>>   at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>>>   ... 4 more
>>>> Caused by: java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:20,417 [IPC Server handler 69 on 60020] FATAL
>>>> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
>>>> Requesting close of hlog
>>>> java.io.IOException: Reflection
>>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.append(HLog.java:1033)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:1852)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1723)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3076)
>>>>   at sun.reflect.GeneratedMethodAccessor96.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.lang.reflect.InvocationTargetException
>>>>   at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>>>   ... 11 more
>>>> Caused by: java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:20,417 [IPC Server handler 29 on 60020] FATAL
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>>>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
>>>> System not available
>>>> java.io.IOException: File system is not available
>>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: java.lang.InterruptedException
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>>   ... 9 more
>>>> Caused by: java.lang.InterruptedException
>>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>>   ... 21 more
>>>> 2012-10-01 19:50:20,420 [IPC Server handler 24 on 60020] FATAL
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>>>> abort: loaded coprocessors are: []
>>>> 2012-10-01 19:50:20,423 [IPC Server handler 58 on 60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>>>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>>>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>>>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>>>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>>>> readRequestsCount=6744201, writeRequestsCount=904280,
>>>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
>>>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>>>> blockCacheCount=5435, blockCacheHitCount=321294212,
>>>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>>>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>>>> hdfsBlocksLocalityIndex=97
>>>> 2012-10-01 19:50:20,420 [IPC Server handler 69 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: (responseTooSlow):
>>>> {"processingtimems":22039,"call":"multi(org.apache.hadoop.hbase.client.MultiAction@207c46fb),
>>>> rpc version=1, client version=29,
>>>> methodsFingerPrint=54742778","client":"10.100.102.155:39852","starttimems":1349120998380,"queuetimems":0,"class":"HRegionServer","responsesize":0,"method":"multi"}
>>>> 2012-10-01 19:50:20,420
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING
>>>> region server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272:
>>>> Unrecoverable exception while closing region
>>>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
>>>> still finishing close
>>>> java.io.IOException: Filesystem closed
>>>>   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
>>>>   at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1859)
>>>>   at java.io.FilterInputStream.close(FilterInputStream.java:155)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:320)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1081)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:568)
>>>>   at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:473)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:789)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:724)
>>>>   at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:119)
>>>>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>   at java.lang.Thread.run(Thread.java:662)
>>>> 2012-10-01 19:50:20,445 [IPC Server handler 58 on 60020] WARN
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>>> fatal error to master
>>>> java.lang.reflect.UndeclaredThrowableException
>>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Call to
>>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>>> local exception: java.nio.channels.ClosedChannelException
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>>   ... 11 more
>>>> Caused by: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>>>> 2012-10-01 19:50:20,446 [IPC Server handler 29 on 60020] WARN
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>>> fatal error to master
>>>> java.lang.reflect.UndeclaredThrowableException
>>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Call to
>>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>>> local exception: java.nio.channels.ClosedByInterruptException
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>>   ... 11 more
>>>> Caused by: java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>>>>   at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
>>>>   at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
>>>>   at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>>>>   at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>>>>   at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:545)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)
>>>>   ... 12 more
>>>> 2012-10-01 19:50:20,447 [IPC Server handler 29 on 60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>>> System not available
>>>> 2012-10-01 19:50:20,446 [IPC Server handler 58 on 60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>>> System not available
>>>> 2012-10-01 19:50:20,446 [IPC Server handler 17 on 60020] WARN
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>>> fatal error to master
>>>> java.lang.reflect.UndeclaredThrowableException
>>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:328)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:362)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1045)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:897)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>>   ... 11 more
>>>> 2012-10-01 19:50:20,448 [IPC Server handler 17 on 60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>>> System not available
>>>> 2012-10-01 19:50:20,445 [IPC Server handler 1 on 60020] WARN
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>>> fatal error to master
>>>> java.lang.reflect.UndeclaredThrowableException
>>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Call to
>>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>>> local exception: java.nio.channels.ClosedChannelException
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>>   ... 11 more
>>>> Caused by: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>>>> 2012-10-01 19:50:20,448 [IPC Server handler 1 on 60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>>> System not available
>>>> 2012-10-01 19:50:20,445
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to
>>>> report fatal error to master
>>>> java.lang.reflect.UndeclaredThrowableException
>>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>>   at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:131)
>>>>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>   at java.lang.Thread.run(Thread.java:662)
>>>> Caused by: java.io.IOException: Call to
>>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>>> local exception: java.nio.channels.ClosedByInterruptException
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>>   ... 7 more
>>>> Caused by: java.nio.channels.ClosedByInterruptException
>>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>>>>   at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
>>>>   at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
>>>>   at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>>>>   at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>>>>   at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:545)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> 2012-10-01 19:50:20,450
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED:
>>>> Unrecoverable exception while closing region
>>>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
>>>> still finishing close
>>>> 2012-10-01 19:50:20,445 [IPC Server handler 69 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> multi(org.apache.hadoop.hbase.client.MultiAction@207c46fb), rpc
>>>> version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.155:39852: output error
>>>> 2012-10-01 19:50:20,445 [IPC Server handler 24 on 60020] WARN
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>>> fatal error to master
>>>> java.lang.reflect.UndeclaredThrowableException
>>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Call to
>>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>>> local exception: java.nio.channels.ClosedChannelException
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>>   ... 11 more
>>>> Caused by: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>>>> 2012-10-01 19:50:20,451 [IPC Server handler 24 on 60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>>> System not available
>>>> 2012-10-01 19:50:20,445 [IPC Server handler 90 on 60020] WARN
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>>> fatal error to master
>>>> java.lang.reflect.UndeclaredThrowableException
>>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Call to
>>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>>> local exception: java.nio.channels.ClosedChannelException
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>>   ... 11 more
>>>> Caused by: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>>>> 2012-10-01 19:50:20,451 [IPC Server handler 90 on 60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>>> System not available
>>>> 2012-10-01 19:50:20,445 [IPC Server handler 25 on 60020] WARN
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>>> fatal error to master
>>>> java.lang.reflect.UndeclaredThrowableException
>>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>>> Caused by: java.io.IOException: Call to
>>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>>> local exception: java.nio.channels.ClosedChannelException
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>>   ... 11 more
>>>> Caused by: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>>   at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>>>> 2012-10-01 19:50:20,452 [IPC Server handler 25 on 60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>>> System not available
>>>> 2012-10-01 19:50:20,452 [IPC Server handler 29 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@5d72e577,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321312"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.101.184:34111: output error
>>>> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@2237178f,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00316983"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.101.188:59581: output error
>>>> 2012-10-01 19:50:20,450 [IPC Server handler 69 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:20,450
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>>> ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable
>>>> while processing event M_RS_CLOSE_REGION
>>>> java.lang.RuntimeException: java.io.IOException: Filesystem closed
>>>>   at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:133)
>>>>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>   at java.lang.Thread.run(Thread.java:662)
>>>> Caused by: java.io.IOException: Filesystem closed
>>>>   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
>>>>   at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1859)
>>>>   at java.io.FilterInputStream.close(FilterInputStream.java:155)
>>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:320)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1081)
>>>>   at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:568)
>>>>   at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:473)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:789)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:724)
>>>>   at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:119)
>>>>   ... 4 more
>>>> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@573dba6d,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"0032027"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.101.183:60076: output error
>>>> 2012-10-01 19:50:20,452 [IPC Server handler 69 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@4eebbed5,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00317054"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.146:40240: output error
>>>> 2012-10-01 19:50:20,452 [IPC Server handler 29 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:20,453 [IPC Server handler 29 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@4ff0ed4a,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00318964"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.101.172:53924: output error
>>>> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@526abe46,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00316914"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.101.184:34110: output error
>>>> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>>> get([B@5df20fef,
>>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319173"}),
>>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>>> 10.100.102.146:40243: output error
>>>> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] WARN
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020
>>>> caught: java.nio.channels.ClosedChannelException
>>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>>>
>>>> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020:
>>>> exiting
>>>> 2012-10-01 19:50:21,066
>>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
>>>> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>>>> java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] WARN
>>>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>>>> java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] FATAL
>>>> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
>>>> Requesting close of hlog
>>>> java.io.IOException: Reflection
>>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>>>>   at java.lang.Thread.run(Thread.java:662)
>>>> Caused by: java.lang.reflect.InvocationTargetException
>>>>   at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>>>   ... 4 more
>>>> Caused by: java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:21,419 [regionserver60020.logSyncer] ERROR
>>>> org.apache.hadoop.hbase.regionserver.wal.HLog: Error while syncing,
>>>> requesting close of hlog
>>>> java.io.IOException: Reflection
>>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>>>>   at java.lang.Thread.run(Thread.java:662)
>>>> Caused by: java.lang.reflect.InvocationTargetException
>>>>   at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>>>   ... 4 more
>>>> Caused by: java.io.IOException: Error Recovery for block
>>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>>> 10.100.102.88:50010. Aborting...
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>>> 2012-10-01 19:50:22,066 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
>>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272; all regions
>>>> closed.
>>>> 2012-10-01 19:50:22,066 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closing
>>>> leases
>>>> 2012-10-01 19:50:22,066 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closed
>>>> leases
>>>> 2012-10-01 19:50:22,082 [regionserver60020] WARN
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed deleting my
>>>> ephemeral node
>>>> org.apache.zookeeper.KeeperException$SessionExpiredException:
>>>> KeeperErrorCode = Session expired for
>>>> /hbase/rs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
>>>>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>>>>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>>>   at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:868)
>>>>   at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:107)
>>>>   at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:962)
>>>>   at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:951)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:964)
>>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:762)
>>>>   at java.lang.Thread.run(Thread.java:662)
>>>> 2012-10-01 19:50:22,082 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
>>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272; zookeeper
>>>> connection closed.
>>>> 2012-10-01 19:50:22,082 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: regionserver60020
>>>> exiting
>>>> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
>>>> starting; hbase.shutdown.hook=true;
>>>> fsShutdownHook=Thread[Thread-5,5,main]
>>>> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown
>>>> hook
>>>> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs
>>>> shutdown hook thread.
>>>> 2012-10-01 19:50:22,124 [Shutdownhook:regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
>>>> finished.
>>>> Mon Oct  1 19:54:10 UTC 2012 Starting regionserver on
>>>> data3024.ngpipes.milp.ngmoco.com
>>>> core file size          (blocks, -c) 0
>>>> data seg size           (kbytes, -d) unlimited
>>>> scheduling priority             (-e) 20
>>>> file size               (blocks, -f) unlimited
>>>> pending signals                 (-i) 16382
>>>> max locked memory       (kbytes, -l) 64
>>>> max memory size         (kbytes, -m) unlimited
>>>> open files                      (-n) 32768
>>>> pipe size            (512 bytes, -p) 8
>>>> POSIX message queues     (bytes, -q) 819200
>>>> real-time priority              (-r) 0
>>>> stack size              (kbytes, -s) 8192
>>>> cpu time               (seconds, -t) unlimited
>>>> max user processes              (-u) unlimited
>>>> virtual memory          (kbytes, -v) unlimited
>>>> file locks                      (-x) unlimited
>>>> 2012-10-01 19:54:11,355 [main] INFO
>>>> org.apache.hadoop.hbase.util.VersionInfo: HBase 0.92.1
>>>> 2012-10-01 19:54:11,356 [main] INFO
>>>> org.apache.hadoop.hbase.util.VersionInfo: Subversion
>>>> https://svn.apache.org/repos/asf/hbase/branches/0.92 -r 1298924
>>>> 2012-10-01 19:54:11,356 [main] INFO
>>>> org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Fri
>>>> Mar  9 16:58:34 UTC 2012
>>>> 2012-10-01 19:54:11,513 [main] INFO
>>>> org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java
>>>> HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc.,
>>>> vmVersion=20.1-b02
>>>> 2012-10-01 19:54:11,513 [main] INFO
>>>> org.apache.hadoop.hbase.util.ServerCommandLine:
>>>> vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx4000m,
>>>> -XX:NewSize=128m, -XX:MaxNewSize=128m,
>>>> -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC,
>>>> -XX:CMSInitiatingOccupancyFraction=75, -verbose:gc,
>>>> -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps,
>>>> -Xloggc:/data2/hbase_log/gc-hbase.log,
>>>> -Dcom.sun.management.jmxremote.authenticate=true,
>>>> -Dcom.sun.management.jmxremote.ssl=false,
>>>> -Dcom.sun.management.jmxremote.password.file=/home/hadoop/hadoop/conf/jmxremote.password,
>>>> -Dcom.sun.management.jmxremote.port=8010,
>>>> -Dhbase.log.dir=/data2/hbase_log,
>>>> -Dhbase.log.file=hbase-hadoop-regionserver-data3024.ngpipes.milp.ngmoco.com.log,
>>>> -Dhbase.home.dir=/home/hadoop/hbase, -Dhbase.id.str=hadoop,
>>>> -Dhbase.root.logger=INFO,DRFA,
>>>> -Djava.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64]
>>>> 2012-10-01 19:54:11,964 [IPC Reader 0 on port 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-10-01 19:54:11,967 [IPC Reader 1 on port 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-10-01 19:54:11,970 [IPC Reader 2 on port 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-10-01 19:54:11,973 [IPC Reader 3 on port 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-10-01 19:54:11,976 [IPC Reader 4 on port 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-10-01 19:54:11,979 [IPC Reader 5 on port 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-10-01 19:54:11,982 [IPC Reader 6 on port 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-10-01 19:54:11,985 [IPC Reader 7 on port 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-10-01 19:54:11,988 [IPC Reader 8 on port 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-10-01 19:54:11,991 [IPC Reader 9 on port 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>>> 2012-10-01 19:54:12,002 [main] INFO
>>>> org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics
>>>> with hostName=HRegionServer, port=60020
>>>> 2012-10-01 19:54:12,081 [main] INFO
>>>> org.apache.hadoop.hbase.io.hfile.CacheConfig: Allocating LruBlockCache
>>>> with maximum size 996.8m
>>>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client
>>>> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48
>>>> GMT
>>>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client
>>>> environment:host.name=data3024.ngpipes.milp.ngmoco.com
>>>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client
>>>> environment:java.version=1.6.0_26
>>>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun
>>>> Microsystems Inc.
>>>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client
>>>> environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
>>>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client
>>>> environment:java.class.path=/home/hadoop/hbase/conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase:/home/hadoop/hbase/hbase-0.92.1.jar:/home/hadoop/hbase/hbase-0.92.1-tests.jar:/home/hadoop/hbase/lib/activation-1.1.jar:/home/hadoop/hbase/lib/asm-3.1.jar:/home/hadoop/hbase/lib/avro-1.5.3.jar:/home/hadoop/hbase/lib/avro-ipc-1.5.3.jar:/home/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hbase/lib/commons-cli-1.2.jar:/home/hadoop/hbase/lib/commons-codec-1.4.jar:/home/hadoop/hbase/lib/commons-collections-3.2.1.jar:/home/hadoop/hbase/lib/commons-configuration-1.6.jar:/home/hadoop/hbase/lib/commons-digester-1.8.jar:/home/hadoop/hbase/lib/commons-el-1.0.jar:/home/hadoop/hbase/lib/commons-httpclient-3.1.jar:/home/hadoop/hbase/lib/commons-lang-2.5.jar:/home/hadoop/hbase/lib/commons-logging-1.1.1.jar:/home/hadoop/hbase/lib/commons-math-2.1.jar:/home/hadoop/hbase/lib/commons-net-1.4.1.jar:/home/hadoop/hbase/lib/core-3.1.1.jar:/home/hadoop/hbase/lib/guava-r09.jar:/home/hadoop/hbase/lib/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hbase/lib/hadoop-lzo-0.4.9.jar:/home/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/home/hadoop/hbase/lib/httpclient-4.0.1.jar:/home/hadoop/hbase/lib/httpcore-4.0.1.jar:/home/hadoop/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/home/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase/lib/jaxb-api-2.1.jar:/home/hadoop/hbase/lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase/lib/jersey-core-1.4.jar:/home/hadoop/hbase/lib/jersey-json-1.4.jar:/home/hadoop/hbase/lib/jersey-server-1.4.jar:/home/hadoop/hbase/lib/jettison-1.1.jar:/home/hadoop/hbase/lib/jetty-6.1.26.jar:/home/hadoop/hbase/lib/jetty-util-6.1.26.jar:/home/hadoop/hbase/lib/jruby-complete-1.6.5
>>>> .jar:/home/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase/lib/libthrift-0.7.0.jar:/home/hadoop/hbase/lib/log4j-1.2.16.jar:/home/hadoop/hbase/lib/netty-3.2.4.Final.jar:/home/hadoop/hbase/lib/protobuf-java-2.4.0a.jar:/home/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase/lib/servlet-api-2.5.jar:/home/hadoop/hbase/lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/home/hadoop/hbase/lib/stax-api-1.0.1.jar:/home/hadoop/hbase/lib/velocity-1.7.jar:/home/hadoop/hbase/lib/xmlenc-0.52.jar:/home/hadoop/hbase/lib/zookeeper-3.4.3.jar:
>>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client
>>>> environment:java.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64
>>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client
>>>> environment:os.version=2.6.35-30-generic
>>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client environment:user.name=hadoop
>>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client
>>>> environment:user.home=/home/hadoop/
>>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Client
>>>> environment:user.dir=/home/gregross
>>>> 2012-10-01 19:54:12,225 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Initiating client connection,
>>>> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
>>>> sessionTimeout=180000 watcher=regionserver:60020
>>>> 2012-10-01 19:54:12,251 [regionserver60020-SendThread()] INFO
>>>> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
>>>> /10.100.102.197:2181
>>>> 2012-10-01 19:54:12,252 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
>>>> of this process is 15403@data3024.ngpipes.milp.ngmoco.com
>>>> 2012-10-01 19:54:12,259
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>>>> SASL-authenticate because the default JAAS configuration section
>>>> 'Client' could not be found. If you are not using SASL, you may ignore
>>>> this. On the other hand, if you expected SASL to work, please fix your
>>>> JAAS configuration.
>>>> 2012-10-01 19:54:12,260
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>>>> session
>>>> 2012-10-01 19:54:12,272
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>>>> server; r-o mode will be unavailable
>>>> 2012-10-01 19:54:12,273
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
>>>> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
>>>> sessionid = 0x137ec64373dd4b5, negotiated timeout = 40000
>>>> 2012-10-01 19:54:12,289 [main] INFO
>>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown
>>>> hook thread: Shutdownhook:regionserver60020
>>>> 2012-10-01 19:54:12,352 [regionserver60020] INFO
>>>> org.apache.zookeeper.ZooKeeper: Initiating client connection,
>>>> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
>>>> sessionTimeout=180000 watcher=hconnection
>>>> 2012-10-01 19:54:12,353 [regionserver60020-SendThread()] INFO
>>>> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
>>>> /10.100.102.197:2181
>>>> 2012-10-01 19:54:12,353 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
>>>> of this process is 15403@data3024.ngpipes.milp.ngmoco.com
>>>> 2012-10-01 19:54:12,354
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>>>> SASL-authenticate because the default JAAS configuration section
>>>> 'Client' could not be found. If you are not using SASL, you may ignore
>>>> this. On the other hand, if you expected SASL to work, please fix your
>>>> JAAS configuration.
>>>> 2012-10-01 19:54:12,354
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>>>> session
>>>> 2012-10-01 19:54:12,361
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>>>> server; r-o mode will be unavailable
>>>> 2012-10-01 19:54:12,361
>>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>>> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
>>>> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
>>>> sessionid = 0x137ec64373dd4b6, negotiated timeout = 40000
>>>> 2012-10-01 19:54:12,384 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
>>>> globalMemStoreLimit=1.6g, globalMemStoreLimitLowMark=1.4g,
>>>> maxHeap=3.9g
>>>> 2012-10-01 19:54:12,400 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Runs every 2hrs,
>>>> 46mins, 40sec
>>>> 2012-10-01 19:54:12,420 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect
>>>> to Master server at
>>>> namenode-sn301.ngpipes.milp.ngmoco.com,60000,1348698078915
>>>> 2012-10-01 19:54:12,453 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to
>>>> master at data3024.ngpipes.milp.ngmoco.com/10.100.101.156:60020
>>>> 2012-10-01 19:54:12,453 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at
>>>> namenode-sn301.ngpipes.milp.ngmoco.com,60000,1348698078915 that we are
>>>> up with port=60020, startcode=1349121252040
>>>> 2012-10-01 19:54:12,476 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us
>>>> hostname to use. Was=data3024.ngpipes.milp.ngmoco.com,
>>>> Now=data3024.ngpipes.milp.ngmoco.com
>>>> 2012-10-01 19:54:12,568 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.wal.HLog: HLog configuration:
>>>> blocksize=64 MB, rollsize=60.8 MB, enabled=true,
>>>> optionallogflushinternal=1000ms
>>>> 2012-10-01 19:54:12,642 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.wal.HLog:  for
>>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1349121252040/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1349121252040.1349121252569
>>>> 2012-10-01 19:54:12,643 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.wal.HLog: Using
>>>> getNumCurrentReplicas--HDFS-826
>>>> 2012-10-01 19:54:12,651 [regionserver60020] INFO
>>>> org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics
>>>> with processName=RegionServer, sessionId=regionserver60020
>>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.metrics: MetricsString added: revision
>>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
>>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
>>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
>>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.metrics: MetricsString added: date
>>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
>>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.metrics: MetricsString added: user
>>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
>>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.metrics: MetricsString added: url
>>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.metrics: MetricsString added: version
>>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.metrics: new MBeanInfo
>>>> 2012-10-01 19:54:12,657 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.metrics: new MBeanInfo
>>>> 2012-10-01 19:54:12,657 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics:
>>>> Initialized
>>>> 2012-10-01 19:54:12,722 [regionserver60020] INFO org.mortbay.log:
>>>> Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>>>> org.mortbay.log.Slf4jLog
>>>> 2012-10-01 19:54:12,774 [regionserver60020] INFO
>>>> org.apache.hadoop.http.HttpServer: Added global filtersafety
>>>> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>>>> 2012-10-01 19:54:12,787 [regionserver60020] INFO
>>>> org.apache.hadoop.http.HttpServer: Port returned by
>>>> webServer.getConnectors()[0].getLocalPort() before open() is -1.
>>>> Opening the listener on 60030
>>>> 2012-10-01 19:54:12,787 [regionserver60020] INFO
>>>> org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned
>>>> 60030 webServer.getConnectors()[0].getLocalPort() returned 60030
>>>> 2012-10-01 19:54:12,787 [regionserver60020] INFO
>>>> org.apache.hadoop.http.HttpServer: Jetty bound to port 60030
>>>> 2012-10-01 19:54:12,787 [regionserver60020] INFO org.mortbay.log: jetty-6.1.26
>>>> 2012-10-01 19:54:13,079 [regionserver60020] INFO org.mortbay.log:
>>>> Started SelectChannelConnector@0.0.0.0:60030
>>>> 2012-10-01 19:54:13,079 [IPC Server Responder] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
>>>> 2012-10-01 19:54:13,079 [IPC Server listener on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60020:
>>>> starting
>>>> 2012-10-01 19:54:13,094 [IPC Server handler 0 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020:
>>>> starting
>>>> 2012-10-01 19:54:13,094 [IPC Server handler 1 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020:
>>>> starting
>>>> 2012-10-01 19:54:13,095 [IPC Server handler 2 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020:
>>>> starting
>>>> 2012-10-01 19:54:13,095 [IPC Server handler 3 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020:
>>>> starting
>>>> [... identical "starting" entries for IPC Server handlers 4-99,
>>>> logged between 19:54:13,095 and 19:54:13,110, elided ...]
>>>> 2012-10-01 19:54:13,110 [PRI IPC Server handler 0 on 60020] INFO
>>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020:
>>>> starting
>>>> [... identical "starting" entries for PRI IPC Server handlers 1-9,
>>>> logged between 19:54:13,110 and 19:54:13,111, elided ...]
>>>> 2012-10-01 19:54:13,124 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Serving as
>>>> data3024.ngpipes.milp.ngmoco.com,60020,1349121252040, RPC listening on
>>>> data3024.ngpipes.milp.ngmoco.com/10.100.101.156:60020,
>>>> sessionid=0x137ec64373dd4b5
>>>> 2012-10-01 19:54:13,124
>>>> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1349121252040]
>>>> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
>>>> SplitLogWorker data3024.ngpipes.milp.ngmoco.com,60020,1349121252040
>>>> starting
>>>> 2012-10-01 19:54:13,125 [regionserver60020] INFO
>>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Registered
>>>> RegionServer MXBean
>>>>
>>>> GC log
>>>> ======
>>>>
>>>> 1.914: [GC 1.914: [ParNew: 99976K->7646K(118016K), 0.0087130 secs]
>>>> 99976K->7646K(123328K), 0.0088110 secs] [Times: user=0.07 sys=0.00,
>>>> real=0.00 secs]
>>>> 416.341: [GC 416.341: [ParNew: 112558K->12169K(118016K), 0.0447760
>>>> secs] 112558K->25025K(133576K), 0.0450080 secs] [Times: user=0.13
>>>> sys=0.02, real=0.05 secs]
>>>> 416.386: [GC [1 CMS-initial-mark: 12855K(15560K)] 25089K(133576K),
>>>> 0.0037570 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 416.390: [CMS-concurrent-mark-start]
>>>> 416.407: [CMS-concurrent-mark: 0.015/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 416.407: [CMS-concurrent-preclean-start]
>>>> 416.408: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 416.408: [GC[YG occupancy: 12233 K (118016 K)]416.408: [Rescan
>>>> (parallel) , 0.0074970 secs]416.416: [weak refs processing, 0.0000370
>>>> secs] [1 CMS-remark: 12855K(15560K)] 25089K(133576K), 0.0076480 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 416.416: [CMS-concurrent-sweep-start]
>>>> 416.419: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 416.419: [CMS-concurrent-reset-start]
>>>> 416.467: [CMS-concurrent-reset: 0.049/0.049 secs] [Times: user=0.01
>>>> sys=0.04, real=0.05 secs]
>>>> 418.468: [GC [1 CMS-initial-mark: 12855K(21428K)] 26216K(139444K),
>>>> 0.0037020 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 418.471: [CMS-concurrent-mark-start]
>>>> 418.487: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 418.487: [CMS-concurrent-preclean-start]
>>>> 418.488: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 418.488: [GC[YG occupancy: 13360 K (118016 K)]418.488: [Rescan
>>>> (parallel) , 0.0090770 secs]418.497: [weak refs processing, 0.0000170
>>>> secs] [1 CMS-remark: 12855K(21428K)] 26216K(139444K), 0.0092220 secs]
>>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>>> 418.497: [CMS-concurrent-sweep-start]
>>>> 418.500: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 418.500: [CMS-concurrent-reset-start]
>>>> 418.511: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 420.512: [GC [1 CMS-initial-mark: 12854K(21428K)] 26344K(139444K),
>>>> 0.0041050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 420.516: [CMS-concurrent-mark-start]
>>>> 420.532: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.05
>>>> sys=0.01, real=0.01 secs]
>>>> 420.532: [CMS-concurrent-preclean-start]
>>>> 420.533: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 420.533: [GC[YG occupancy: 13489 K (118016 K)]420.533: [Rescan
>>>> (parallel) , 0.0014850 secs]420.534: [weak refs processing, 0.0000130
>>>> secs] [1 CMS-remark: 12854K(21428K)] 26344K(139444K), 0.0015920 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 420.534: [CMS-concurrent-sweep-start]
>>>> 420.537: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 420.537: [CMS-concurrent-reset-start]
>>>> 420.548: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 422.437: [GC [1 CMS-initial-mark: 12854K(21428K)] 28692K(139444K),
>>>> 0.0051030 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 422.443: [CMS-concurrent-mark-start]
>>>> 422.458: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 422.458: [CMS-concurrent-preclean-start]
>>>> 422.458: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 422.458: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 427.541:
>>>> [CMS-concurrent-abortable-preclean: 0.678/5.083 secs] [Times:
>>>> user=0.66 sys=0.00, real=5.08 secs]
>>>> 427.541: [GC[YG occupancy: 16198 K (118016 K)]427.541: [Rescan
>>>> (parallel) , 0.0013750 secs]427.543: [weak refs processing, 0.0000140
>>>> secs] [1 CMS-remark: 12854K(21428K)] 29053K(139444K), 0.0014800 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 427.543: [CMS-concurrent-sweep-start]
>>>> 427.544: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 427.544: [CMS-concurrent-reset-start]
>>>> 427.557: [CMS-concurrent-reset: 0.013/0.013 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> [... the same CMS cycle (initial-mark, concurrent mark/preclean,
>>>> abortable preclean aborting after ~5 s, remark, sweep, reset)
>>>> repeats roughly every 7 s from 429.557 through 512.709; the old
>>>> generation holds steady at ~12854K and every reported
>>>> stop-the-world pause in this excerpt is under 10 ms ...]
>>>> sys=0.00, real=0.01 secs]
>>>> 514.710: [GC [1 CMS-initial-mark: 12854K(21428K)] 36468K(139444K),
>>>> 0.0028400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 514.713: [CMS-concurrent-mark-start]
>>>> 514.725: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
>>>> sys=0.00, real=0.02 secs]
>>>> 514.725: [CMS-concurrent-preclean-start]
>>>> 514.725: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 514.725: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 519.800:
>>>> [CMS-concurrent-abortable-preclean: 0.619/5.075 secs] [Times:
>>>> user=0.66 sys=0.00, real=5.07 secs]
>>>> 519.801: [GC[YG occupancy: 25022 K (118016 K)]519.801: [Rescan
>>>> (parallel) , 0.0023950 secs]519.803: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12854K(21428K)] 37877K(139444K), 0.0024980 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.01 secs]
>>>> 519.803: [CMS-concurrent-sweep-start]
>>>> 519.805: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 519.805: [CMS-concurrent-reset-start]
>>>> 519.813: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 521.814: [GC [1 CMS-initial-mark: 12854K(21428K)] 38005K(139444K),
>>>> 0.0045520 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 521.818: [CMS-concurrent-mark-start]
>>>> 521.833: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 521.833: [CMS-concurrent-preclean-start]
>>>> 521.833: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 521.833: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 526.840:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 526.840: [GC[YG occupancy: 25471 K (118016 K)]526.840: [Rescan
>>>> (parallel) , 0.0024440 secs]526.843: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12854K(21428K)] 38326K(139444K), 0.0025440 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 526.843: [CMS-concurrent-sweep-start]
>>>> 526.845: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 526.845: [CMS-concurrent-reset-start]
>>>> 526.853: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 528.853: [GC [1 CMS-initial-mark: 12849K(21428K)] 38449K(139444K),
>>>> 0.0045550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 528.858: [CMS-concurrent-mark-start]
>>>> 528.872: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 528.872: [CMS-concurrent-preclean-start]
>>>> 528.873: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 528.873: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 533.876:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 533.876: [GC[YG occupancy: 25919 K (118016 K)]533.877: [Rescan
>>>> (parallel) , 0.0028370 secs]533.879: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 38769K(139444K), 0.0029390 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 533.880: [CMS-concurrent-sweep-start]
>>>> 533.882: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 533.882: [CMS-concurrent-reset-start]
>>>> 533.891: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 535.891: [GC [1 CMS-initial-mark: 12849K(21428K)] 38897K(139444K),
>>>> 0.0046460 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 535.896: [CMS-concurrent-mark-start]
>>>> 535.910: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 535.910: [CMS-concurrent-preclean-start]
>>>> 535.911: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 535.911: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 540.917:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 540.917: [GC[YG occupancy: 26367 K (118016 K)]540.917: [Rescan
>>>> (parallel) , 0.0025680 secs]540.920: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 39217K(139444K), 0.0026690 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>>> 540.920: [CMS-concurrent-sweep-start]
>>>> 540.922: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 540.922: [CMS-concurrent-reset-start]
>>>> 540.930: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 542.466: [GC [1 CMS-initial-mark: 12849K(21428K)] 39555K(139444K),
>>>> 0.0050040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 542.471: [CMS-concurrent-mark-start]
>>>> 542.486: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.05
>>>> sys=0.00, real=0.02 secs]
>>>> 542.486: [CMS-concurrent-preclean-start]
>>>> 542.486: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 542.486: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 547.491:
>>>> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 547.491: [GC[YG occupancy: 27066 K (118016 K)]547.491: [Rescan
>>>> (parallel) , 0.0024720 secs]547.494: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 39916K(139444K), 0.0025720 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.01 secs]
>>>> 547.494: [CMS-concurrent-sweep-start]
>>>> 547.496: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 547.496: [CMS-concurrent-reset-start]
>>>> 547.505: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 549.506: [GC [1 CMS-initial-mark: 12849K(21428K)] 40044K(139444K),
>>>> 0.0048760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 549.511: [CMS-concurrent-mark-start]
>>>> 549.524: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 549.524: [CMS-concurrent-preclean-start]
>>>> 549.525: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 549.525: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 554.530:
>>>> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 554.530: [GC[YG occupancy: 27515 K (118016 K)]554.530: [Rescan
>>>> (parallel) , 0.0025270 secs]554.533: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 40364K(139444K), 0.0026190 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>>> 554.533: [CMS-concurrent-sweep-start]
>>>> 554.534: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 554.534: [CMS-concurrent-reset-start]
>>>> 554.542: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 556.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 40493K(139444K),
>>>> 0.0048950 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 556.548: [CMS-concurrent-mark-start]
>>>> 556.562: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 556.562: [CMS-concurrent-preclean-start]
>>>> 556.562: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 556.563: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 561.565:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 561.566: [GC[YG occupancy: 27963 K (118016 K)]561.566: [Rescan
>>>> (parallel) , 0.0025900 secs]561.568: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 40813K(139444K), 0.0026910 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 561.569: [CMS-concurrent-sweep-start]
>>>> 561.570: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 561.570: [CMS-concurrent-reset-start]
>>>> 561.578: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 563.579: [GC [1 CMS-initial-mark: 12849K(21428K)] 40941K(139444K),
>>>> 0.0049390 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 563.584: [CMS-concurrent-mark-start]
>>>> 563.598: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 563.598: [CMS-concurrent-preclean-start]
>>>> 563.598: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 563.598: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 568.693:
>>>> [CMS-concurrent-abortable-preclean: 0.717/5.095 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.09 secs]
>>>> 568.694: [GC[YG occupancy: 28411 K (118016 K)]568.694: [Rescan
>>>> (parallel) , 0.0035750 secs]568.697: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 41261K(139444K), 0.0036740 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>>> 568.698: [CMS-concurrent-sweep-start]
>>>> 568.700: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 568.700: [CMS-concurrent-reset-start]
>>>> 568.709: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 570.709: [GC [1 CMS-initial-mark: 12849K(21428K)] 41389K(139444K),
>>>> 0.0048710 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 570.714: [CMS-concurrent-mark-start]
>>>> 570.729: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 570.729: [CMS-concurrent-preclean-start]
>>>> 570.729: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 570.729: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 575.738:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 575.738: [GC[YG occupancy: 28900 K (118016 K)]575.738: [Rescan
>>>> (parallel) , 0.0036390 secs]575.742: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 41750K(139444K), 0.0037440 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>>> 575.742: [CMS-concurrent-sweep-start]
>>>> 575.744: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 575.744: [CMS-concurrent-reset-start]
>>>> 575.752: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 577.752: [GC [1 CMS-initial-mark: 12849K(21428K)] 41878K(139444K),
>>>> 0.0050100 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 577.758: [CMS-concurrent-mark-start]
>>>> 577.772: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 577.772: [CMS-concurrent-preclean-start]
>>>> 577.773: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 577.773: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 582.779:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 582.779: [GC[YG occupancy: 29348 K (118016 K)]582.779: [Rescan
>>>> (parallel) , 0.0026100 secs]582.782: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 42198K(139444K), 0.0027110 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 582.782: [CMS-concurrent-sweep-start]
>>>> 582.784: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 582.784: [CMS-concurrent-reset-start]
>>>> 582.792: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 584.792: [GC [1 CMS-initial-mark: 12849K(21428K)] 42326K(139444K),
>>>> 0.0050510 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 584.798: [CMS-concurrent-mark-start]
>>>> 584.812: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 584.812: [CMS-concurrent-preclean-start]
>>>> 584.813: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 584.813: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 589.819:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 589.819: [GC[YG occupancy: 29797 K (118016 K)]589.819: [Rescan
>>>> (parallel) , 0.0039510 secs]589.823: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 42647K(139444K), 0.0040460 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.01 secs]
>>>> 589.824: [CMS-concurrent-sweep-start]
>>>> 589.826: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 589.826: [CMS-concurrent-reset-start]
>>>> 589.835: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 591.835: [GC [1 CMS-initial-mark: 12849K(21428K)] 42775K(139444K),
>>>> 0.0050090 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 591.840: [CMS-concurrent-mark-start]
>>>> 591.855: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 591.855: [CMS-concurrent-preclean-start]
>>>> 591.855: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 591.855: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 596.857:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 596.857: [GC[YG occupancy: 31414 K (118016 K)]596.857: [Rescan
>>>> (parallel) , 0.0028500 secs]596.860: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 44264K(139444K), 0.0029480 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 596.861: [CMS-concurrent-sweep-start]
>>>> 596.862: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 596.862: [CMS-concurrent-reset-start]
>>>> 596.870: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 598.871: [GC [1 CMS-initial-mark: 12849K(21428K)] 44392K(139444K),
>>>> 0.0050640 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 598.876: [CMS-concurrent-mark-start]
>>>> 598.890: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 598.890: [CMS-concurrent-preclean-start]
>>>> 598.891: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 598.891: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 603.897:
>>>> [CMS-concurrent-abortable-preclean: 0.705/5.007 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.01 secs]
>>>> 603.898: [GC[YG occupancy: 32032 K (118016 K)]603.898: [Rescan
>>>> (parallel) , 0.0039660 secs]603.902: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 44882K(139444K), 0.0040680 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>>> 603.902: [CMS-concurrent-sweep-start]
>>>> 603.903: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 603.903: [CMS-concurrent-reset-start]
>>>> 603.912: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 605.912: [GC [1 CMS-initial-mark: 12849K(21428K)] 45010K(139444K),
>>>> 0.0053650 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 605.918: [CMS-concurrent-mark-start]
>>>> 605.932: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 605.932: [CMS-concurrent-preclean-start]
>>>> 605.932: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 605.932: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 610.939:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 610.940: [GC[YG occupancy: 32481 K (118016 K)]610.940: [Rescan
>>>> (parallel) , 0.0032540 secs]610.943: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 45330K(139444K), 0.0033560 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 610.943: [CMS-concurrent-sweep-start]
>>>> 610.944: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 610.945: [CMS-concurrent-reset-start]
>>>> 610.953: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 612.486: [GC [1 CMS-initial-mark: 12849K(21428K)] 45459K(139444K),
>>>> 0.0055070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 612.492: [CMS-concurrent-mark-start]
>>>> 612.505: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 612.505: [CMS-concurrent-preclean-start]
>>>> 612.506: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 612.506: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 617.511:
>>>> [CMS-concurrent-abortable-preclean: 0.700/5.006 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 617.512: [GC[YG occupancy: 32929 K (118016 K)]617.512: [Rescan
>>>> (parallel) , 0.0037500 secs]617.516: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 45779K(139444K), 0.0038560 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 617.516: [CMS-concurrent-sweep-start]
>>>> 617.518: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 617.518: [CMS-concurrent-reset-start]
>>>> 617.527: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 619.528: [GC [1 CMS-initial-mark: 12849K(21428K)] 45907K(139444K),
>>>> 0.0053320 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 619.533: [CMS-concurrent-mark-start]
>>>> 619.546: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
>>>> sys=0.00, real=0.02 secs]
>>>> 619.546: [CMS-concurrent-preclean-start]
>>>> 619.547: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 619.547: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 624.552:
>>>> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 624.552: [GC[YG occupancy: 33377 K (118016 K)]624.552: [Rescan
>>>> (parallel) , 0.0037290 secs]624.556: [weak refs processing, 0.0000130
>>>> secs] [1 CMS-remark: 12849K(21428K)] 46227K(139444K), 0.0038330 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 624.556: [CMS-concurrent-sweep-start]
>>>> 624.558: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 624.558: [CMS-concurrent-reset-start]
>>>> 624.568: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 626.568: [GC [1 CMS-initial-mark: 12849K(21428K)] 46355K(139444K),
>>>> 0.0054240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 626.574: [CMS-concurrent-mark-start]
>>>> 626.588: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 626.588: [CMS-concurrent-preclean-start]
>>>> 626.588: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 626.588: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 631.592:
>>>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 631.592: [GC[YG occupancy: 33825 K (118016 K)]631.593: [Rescan
>>>> (parallel) , 0.0041600 secs]631.597: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 46675K(139444K), 0.0042650 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 631.597: [CMS-concurrent-sweep-start]
>>>> 631.598: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 631.598: [CMS-concurrent-reset-start]
>>>> 631.607: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 632.495: [GC [1 CMS-initial-mark: 12849K(21428K)] 46839K(139444K),
>>>> 0.0054380 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 632.501: [CMS-concurrent-mark-start]
>>>> 632.516: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 632.516: [CMS-concurrent-preclean-start]
>>>> 632.517: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 632.517: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 637.519:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 637.519: [GC[YG occupancy: 34350 K (118016 K)]637.519: [Rescan
>>>> (parallel) , 0.0025310 secs]637.522: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 47200K(139444K), 0.0026540 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 637.522: [CMS-concurrent-sweep-start]
>>>> 637.523: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 637.523: [CMS-concurrent-reset-start]
>>>> 637.532: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 639.532: [GC [1 CMS-initial-mark: 12849K(21428K)] 47328K(139444K),
>>>> 0.0055330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 639.538: [CMS-concurrent-mark-start]
>>>> 639.551: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 639.551: [CMS-concurrent-preclean-start]
>>>> 639.552: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 639.552: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 644.561:
>>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 644.561: [GC[YG occupancy: 34798 K (118016 K)]644.561: [Rescan
>>>> (parallel) , 0.0040620 secs]644.565: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 47648K(139444K), 0.0041610 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 644.566: [CMS-concurrent-sweep-start]
>>>> 644.568: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 644.568: [CMS-concurrent-reset-start]
>>>> 644.577: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 646.577: [GC [1 CMS-initial-mark: 12849K(21428K)] 47776K(139444K),
>>>> 0.0054660 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 646.583: [CMS-concurrent-mark-start]
>>>> 646.596: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 646.596: [CMS-concurrent-preclean-start]
>>>> 646.597: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 646.597: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 651.678:
>>>> [CMS-concurrent-abortable-preclean: 0.732/5.081 secs] [Times:
>>>> user=0.74 sys=0.00, real=5.08 secs]
>>>> 651.678: [GC[YG occupancy: 35246 K (118016 K)]651.678: [Rescan
>>>> (parallel) , 0.0025920 secs]651.681: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 48096K(139444K), 0.0026910 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 651.681: [CMS-concurrent-sweep-start]
>>>> 651.682: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 651.682: [CMS-concurrent-reset-start]
>>>> 651.690: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 653.691: [GC [1 CMS-initial-mark: 12849K(21428K)] 48224K(139444K),
>>>> 0.0055640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 653.696: [CMS-concurrent-mark-start]
>>>> 653.711: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 653.711: [CMS-concurrent-preclean-start]
>>>> 653.711: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 653.711: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 658.721:
>>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 658.721: [GC[YG occupancy: 35695 K (118016 K)]658.721: [Rescan
>>>> (parallel) , 0.0040160 secs]658.725: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 48545K(139444K), 0.0041130 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 658.725: [CMS-concurrent-sweep-start]
>>>> 658.727: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 658.728: [CMS-concurrent-reset-start]
>>>> 658.737: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 660.737: [GC [1 CMS-initial-mark: 12849K(21428K)] 48673K(139444K),
>>>> 0.0055230 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 660.743: [CMS-concurrent-mark-start]
>>>> 660.756: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 660.756: [CMS-concurrent-preclean-start]
>>>> 660.757: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 660.757: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 665.767:
>>>> [CMS-concurrent-abortable-preclean: 0.704/5.011 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 665.768: [GC[YG occupancy: 36289 K (118016 K)]665.768: [Rescan
>>>> (parallel) , 0.0033040 secs]665.771: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 49139K(139444K), 0.0034090 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 665.771: [CMS-concurrent-sweep-start]
>>>> 665.773: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 665.773: [CMS-concurrent-reset-start]
>>>> 665.781: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 667.781: [GC [1 CMS-initial-mark: 12849K(21428K)] 49267K(139444K),
>>>> 0.0057830 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 667.787: [CMS-concurrent-mark-start]
>>>> 667.802: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 667.802: [CMS-concurrent-preclean-start]
>>>> 667.802: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 667.802: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 672.809:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 672.810: [GC[YG occupancy: 36737 K (118016 K)]672.810: [Rescan
>>>> (parallel) , 0.0037010 secs]672.813: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 49587K(139444K), 0.0038010 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.01 secs]
>>>> 672.814: [CMS-concurrent-sweep-start]
>>>> 672.815: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 672.815: [CMS-concurrent-reset-start]
>>>> 672.824: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 674.824: [GC [1 CMS-initial-mark: 12849K(21428K)] 49715K(139444K),
>>>> 0.0058920 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 674.830: [CMS-concurrent-mark-start]
>>>> 674.845: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 674.845: [CMS-concurrent-preclean-start]
>>>> 674.845: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 674.845: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 679.849:
>>>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 679.850: [GC[YG occupancy: 37185 K (118016 K)]679.850: [Rescan
>>>> (parallel) , 0.0033420 secs]679.853: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 50035K(139444K), 0.0034440 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.01 secs]
>>>> 679.853: [CMS-concurrent-sweep-start]
>>>> 679.855: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 679.855: [CMS-concurrent-reset-start]
>>>> 679.863: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 681.864: [GC [1 CMS-initial-mark: 12849K(21428K)] 50163K(139444K),
>>>> 0.0058780 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 681.870: [CMS-concurrent-mark-start]
>>>> 681.884: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 681.884: [CMS-concurrent-preclean-start]
>>>> 681.884: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 681.884: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 686.890:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 686.891: [GC[YG occupancy: 37634 K (118016 K)]686.891: [Rescan
>>>> (parallel) , 0.0044480 secs]686.895: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 50483K(139444K), 0.0045570 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 686.896: [CMS-concurrent-sweep-start]
>>>> 686.897: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 686.897: [CMS-concurrent-reset-start]
>>>> 686.905: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 688.905: [GC [1 CMS-initial-mark: 12849K(21428K)] 50612K(139444K),
>>>> 0.0058940 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 688.911: [CMS-concurrent-mark-start]
>>>> 688.925: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 688.925: [CMS-concurrent-preclean-start]
>>>> 688.925: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 688.926: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 694.041:
>>>> [CMS-concurrent-abortable-preclean: 0.718/5.115 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.11 secs]
>>>> 694.041: [GC[YG occupancy: 38122 K (118016 K)]694.041: [Rescan
>>>> (parallel) , 0.0028640 secs]694.044: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 50972K(139444K), 0.0029660 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.01 secs]
>>>> 694.044: [CMS-concurrent-sweep-start]
>>>> 694.046: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 694.046: [CMS-concurrent-reset-start]
>>>> 694.054: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 696.054: [GC [1 CMS-initial-mark: 12849K(21428K)] 51100K(139444K),
>>>> 0.0060550 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 696.060: [CMS-concurrent-mark-start]
>>>> 696.074: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 696.074: [CMS-concurrent-preclean-start]
>>>> 696.075: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 696.075: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 701.078:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 701.079: [GC[YG occupancy: 38571 K (118016 K)]701.079: [Rescan
>>>> (parallel) , 0.0064210 secs]701.085: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 51421K(139444K), 0.0065220 secs]
>>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>>> 701.085: [CMS-concurrent-sweep-start]
>>>> 701.087: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 701.088: [CMS-concurrent-reset-start]
>>>> 701.097: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 703.097: [GC [1 CMS-initial-mark: 12849K(21428K)] 51549K(139444K),
>>>> 0.0058470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 703.103: [CMS-concurrent-mark-start]
>>>> 703.116: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
>>>> sys=0.00, real=0.02 secs]
>>>> 703.116: [CMS-concurrent-preclean-start]
>>>> 703.117: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 703.117: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 708.125:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 708.125: [GC[YG occupancy: 39054 K (118016 K)]708.125: [Rescan
>>>> (parallel) , 0.0037190 secs]708.129: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 51904K(139444K), 0.0038220 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>>> 708.129: [CMS-concurrent-sweep-start]
>>>> 708.131: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 708.131: [CMS-concurrent-reset-start]
>>>> 708.139: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 710.139: [GC [1 CMS-initial-mark: 12849K(21428K)] 52032K(139444K),
>>>> 0.0059770 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 710.145: [CMS-concurrent-mark-start]
>>>> 710.158: [CMS-concurrent-mark: 0.012/0.012 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 710.158: [CMS-concurrent-preclean-start]
>>>> 710.158: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 710.158: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 715.169:
>>>> [CMS-concurrent-abortable-preclean: 0.705/5.011 secs] [Times:
>>>> user=0.69 sys=0.01, real=5.01 secs]
>>>> 715.169: [GC[YG occupancy: 39503 K (118016 K)]715.169: [Rescan
>>>> (parallel) , 0.0042370 secs]715.173: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 52353K(139444K), 0.0043410 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 715.174: [CMS-concurrent-sweep-start]
>>>> 715.176: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 715.176: [CMS-concurrent-reset-start]
>>>> 715.185: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 717.185: [GC [1 CMS-initial-mark: 12849K(21428K)] 52481K(139444K),
>>>> 0.0060050 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 717.191: [CMS-concurrent-mark-start]
>>>> 717.205: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 717.205: [CMS-concurrent-preclean-start]
>>>> 717.206: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 717.206: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 722.209:
>>>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.00 secs]
>>>> 722.210: [GC[YG occupancy: 40161 K (118016 K)]722.210: [Rescan
>>>> (parallel) , 0.0041630 secs]722.214: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 53011K(139444K), 0.0042630 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 722.214: [CMS-concurrent-sweep-start]
>>>> 722.216: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 722.216: [CMS-concurrent-reset-start]
>>>> 722.226: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 722.521: [GC [1 CMS-initial-mark: 12849K(21428K)] 53099K(139444K),
>>>> 0.0062380 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 722.528: [CMS-concurrent-mark-start]
>>>> 722.544: [CMS-concurrent-mark: 0.015/0.016 secs] [Times: user=0.05
>>>> sys=0.01, real=0.02 secs]
>>>> 722.544: [CMS-concurrent-preclean-start]
>>>> 722.544: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 722.544: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 727.558:
>>>> [CMS-concurrent-abortable-preclean: 0.709/5.014 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 727.558: [GC[YG occupancy: 40610 K (118016 K)]727.558: [Rescan
>>>> (parallel) , 0.0041700 secs]727.563: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 53460K(139444K), 0.0042780 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.00 secs]
>>>> 727.563: [CMS-concurrent-sweep-start]
>>>> 727.564: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 727.564: [CMS-concurrent-reset-start]
>>>> 727.573: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.02 secs]
>>>> 729.574: [GC [1 CMS-initial-mark: 12849K(21428K)] 53588K(139444K),
>>>> 0.0062700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 729.580: [CMS-concurrent-mark-start]
>>>> 729.595: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>>> sys=0.00, real=0.02 secs]
>>>> 729.595: [CMS-concurrent-preclean-start]
>>>> 729.595: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 729.595: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 734.597:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 734.597: [GC[YG occupancy: 41058 K (118016 K)]734.597: [Rescan
>>>> (parallel) , 0.0053870 secs]734.603: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 53908K(139444K), 0.0054870 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>>>> 734.603: [CMS-concurrent-sweep-start]
>>>> 734.604: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 734.604: [CMS-concurrent-reset-start]
>>>> 734.614: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 734.877: [GC [1 CMS-initial-mark: 12849K(21428K)] 53908K(139444K),
>>>> 0.0067230 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 734.884: [CMS-concurrent-mark-start]
>>>> 734.899: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 734.899: [CMS-concurrent-preclean-start]
>>>> 734.899: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 734.899: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 739.905:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 739.906: [GC[YG occupancy: 41379 K (118016 K)]739.906: [Rescan
>>>> (parallel) , 0.0050680 secs]739.911: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 54228K(139444K), 0.0051690 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.00 secs]
>>>> 739.911: [CMS-concurrent-sweep-start]
>>>> 739.912: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 739.912: [CMS-concurrent-reset-start]
>>>> 739.921: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 741.922: [GC [1 CMS-initial-mark: 12849K(21428K)] 54356K(139444K),
>>>> 0.0062880 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 741.928: [CMS-concurrent-mark-start]
>>>> 741.942: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 741.942: [CMS-concurrent-preclean-start]
>>>> 741.943: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 741.943: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 747.059:
>>>> [CMS-concurrent-abortable-preclean: 0.711/5.117 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.12 secs]
>>>> 747.060: [GC[YG occupancy: 41827 K (118016 K)]747.060: [Rescan
>>>> (parallel) , 0.0051040 secs]747.065: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 54677K(139444K), 0.0052090 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 747.065: [CMS-concurrent-sweep-start]
>>>> 747.067: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 747.067: [CMS-concurrent-reset-start]
>>>> 747.075: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 749.075: [GC [1 CMS-initial-mark: 12849K(21428K)] 54805K(139444K),
>>>> 0.0063470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 749.082: [CMS-concurrent-mark-start]
>>>> 749.095: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 749.095: [CMS-concurrent-preclean-start]
>>>> 749.096: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 749.096: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 754.175:
>>>> [CMS-concurrent-abortable-preclean: 0.718/5.079 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.08 secs]
>>>> 754.175: [GC[YG occupancy: 42423 K (118016 K)]754.175: [Rescan
>>>> (parallel) , 0.0051290 secs]754.180: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 55273K(139444K), 0.0052290 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.00 secs]
>>>> 754.181: [CMS-concurrent-sweep-start]
>>>> 754.182: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 754.182: [CMS-concurrent-reset-start]
>>>> 754.191: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 756.191: [GC [1 CMS-initial-mark: 12849K(21428K)] 55401K(139444K),
>>>> 0.0064020 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 756.198: [CMS-concurrent-mark-start]
>>>> 756.212: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 756.212: [CMS-concurrent-preclean-start]
>>>> 756.213: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 756.213: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 761.217:
>>>> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 761.218: [GC[YG occupancy: 42871 K (118016 K)]761.218: [Rescan
>>>> (parallel) , 0.0052310 secs]761.223: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 55721K(139444K), 0.0053300 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>>>> 761.223: [CMS-concurrent-sweep-start]
>>>> 761.225: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 761.225: [CMS-concurrent-reset-start]
>>>> 761.234: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 763.234: [GC [1 CMS-initial-mark: 12849K(21428K)] 55849K(139444K),
>>>> 0.0045400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 763.239: [CMS-concurrent-mark-start]
>>>> 763.253: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 763.253: [CMS-concurrent-preclean-start]
>>>> 763.253: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 763.253: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 768.348:
>>>> [CMS-concurrent-abortable-preclean: 0.690/5.095 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.10 secs]
>>>> 768.349: [GC[YG occupancy: 43320 K (118016 K)]768.349: [Rescan
>>>> (parallel) , 0.0045140 secs]768.353: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 56169K(139444K), 0.0046170 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 768.353: [CMS-concurrent-sweep-start]
>>>> 768.356: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 768.356: [CMS-concurrent-reset-start]
>>>> 768.365: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 770.365: [GC [1 CMS-initial-mark: 12849K(21428K)] 56298K(139444K),
>>>> 0.0063950 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 770.372: [CMS-concurrent-mark-start]
>>>> 770.388: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 770.388: [CMS-concurrent-preclean-start]
>>>> 770.388: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 770.388: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 775.400:
>>>> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 775.401: [GC[YG occupancy: 43768 K (118016 K)]775.401: [Rescan
>>>> (parallel) , 0.0043990 secs]775.405: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 56618K(139444K), 0.0045000 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>>> 775.405: [CMS-concurrent-sweep-start]
>>>> 775.407: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 775.407: [CMS-concurrent-reset-start]
>>>> 775.417: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 777.417: [GC [1 CMS-initial-mark: 12849K(21428K)] 56746K(139444K),
>>>> 0.0064580 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 777.423: [CMS-concurrent-mark-start]
>>>> 777.438: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 777.438: [CMS-concurrent-preclean-start]
>>>> 777.439: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 777.439: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 782.448:
>>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 782.448: [GC[YG occupancy: 44321 K (118016 K)]782.448: [Rescan
>>>> (parallel) , 0.0054760 secs]782.454: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 57171K(139444K), 0.0055780 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 782.454: [CMS-concurrent-sweep-start]
>>>> 782.455: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 782.455: [CMS-concurrent-reset-start]
>>>> 782.464: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 782.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 57235K(139444K),
>>>> 0.0066970 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 782.550: [CMS-concurrent-mark-start]
>>>> 782.567: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 782.567: [CMS-concurrent-preclean-start]
>>>> 782.568: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 782.568: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 787.574:
>>>> [CMS-concurrent-abortable-preclean: 0.700/5.006 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 787.574: [GC[YG occupancy: 44746 K (118016 K)]787.574: [Rescan
>>>> (parallel) , 0.0049170 secs]787.579: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 57596K(139444K), 0.0050210 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>>>> 787.579: [CMS-concurrent-sweep-start]
>>>> 787.581: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 787.581: [CMS-concurrent-reset-start]
>>>> 787.590: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 789.591: [GC [1 CMS-initial-mark: 12849K(21428K)] 57724K(139444K),
>>>> 0.0066850 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 789.598: [CMS-concurrent-mark-start]
>>>> 789.614: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 789.614: [CMS-concurrent-preclean-start]
>>>> 789.615: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 789.615: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 794.626:
>>>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 794.627: [GC[YG occupancy: 45195 K (118016 K)]794.627: [Rescan
>>>> (parallel) , 0.0056520 secs]794.632: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 58044K(139444K), 0.0057510 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>>>> 794.632: [CMS-concurrent-sweep-start]
>>>> 794.634: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 794.634: [CMS-concurrent-reset-start]
>>>> 794.643: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 796.643: [GC [1 CMS-initial-mark: 12849K(21428K)] 58172K(139444K),
>>>> 0.0067410 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 796.650: [CMS-concurrent-mark-start]
>>>> 796.666: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 796.666: [CMS-concurrent-preclean-start]
>>>> 796.667: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 796.667: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 801.670:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 801.670: [GC[YG occupancy: 45643 K (118016 K)]801.670: [Rescan
>>>> (parallel) , 0.0043550 secs]801.675: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 58493K(139444K), 0.0044580 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 801.675: [CMS-concurrent-sweep-start]
>>>> 801.677: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 801.677: [CMS-concurrent-reset-start]
>>>> 801.686: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 803.686: [GC [1 CMS-initial-mark: 12849K(21428K)] 58621K(139444K),
>>>> 0.0067250 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 803.693: [CMS-concurrent-mark-start]
>>>> 803.708: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 803.708: [CMS-concurrent-preclean-start]
>>>> 803.709: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 803.709: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 808.717:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.008 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 808.717: [GC[YG occupancy: 46091 K (118016 K)]808.717: [Rescan
>>>> (parallel) , 0.0034790 secs]808.720: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 58941K(139444K), 0.0035820 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>>> 808.721: [CMS-concurrent-sweep-start]
>>>> 808.722: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 808.722: [CMS-concurrent-reset-start]
>>>> 808.730: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 810.731: [GC [1 CMS-initial-mark: 12849K(21428K)] 59069K(139444K),
>>>> 0.0067580 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 810.738: [CMS-concurrent-mark-start]
>>>> 810.755: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 810.755: [CMS-concurrent-preclean-start]
>>>> 810.755: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 810.755: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 815.823:
>>>> [CMS-concurrent-abortable-preclean: 0.715/5.068 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.06 secs]
>>>> 815.824: [GC[YG occupancy: 46580 K (118016 K)]815.824: [Rescan
>>>> (parallel) , 0.0048490 secs]815.829: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 59430K(139444K), 0.0049600 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>>>> 815.829: [CMS-concurrent-sweep-start]
>>>> 815.831: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 815.831: [CMS-concurrent-reset-start]
>>>> 815.840: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 817.840: [GC [1 CMS-initial-mark: 12849K(21428K)] 59558K(139444K),
>>>> 0.0068880 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 817.847: [CMS-concurrent-mark-start]
>>>> 817.864: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 817.864: [CMS-concurrent-preclean-start]
>>>> 817.865: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 817.865: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 822.868:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>>>> user=0.69 sys=0.01, real=5.00 secs]
>>>> 822.868: [GC[YG occupancy: 47028 K (118016 K)]822.868: [Rescan
>>>> (parallel) , 0.0061120 secs]822.874: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 59878K(139444K), 0.0062150 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>>> 822.874: [CMS-concurrent-sweep-start]
>>>> 822.876: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 822.876: [CMS-concurrent-reset-start]
>>>> 822.885: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 824.885: [GC [1 CMS-initial-mark: 12849K(21428K)] 60006K(139444K),
>>>> 0.0068610 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 824.892: [CMS-concurrent-mark-start]
>>>> 824.908: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 824.908: [CMS-concurrent-preclean-start]
>>>> 824.908: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 824.908: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 829.914:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 829.915: [GC[YG occupancy: 47477 K (118016 K)]829.915: [Rescan
>>>> (parallel) , 0.0034890 secs]829.918: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 60327K(139444K), 0.0035930 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>>> 829.918: [CMS-concurrent-sweep-start]
>>>> 829.920: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 829.920: [CMS-concurrent-reset-start]
>>>> 829.930: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 831.930: [GC [1 CMS-initial-mark: 12849K(21428K)] 60455K(139444K),
>>>> 0.0069040 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 831.937: [CMS-concurrent-mark-start]
>>>> 831.953: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 831.953: [CMS-concurrent-preclean-start]
>>>> 831.954: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 831.954: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 836.957:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.00 secs]
>>>> 836.957: [GC[YG occupancy: 47925 K (118016 K)]836.957: [Rescan
>>>> (parallel) , 0.0060440 secs]836.963: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 60775K(139444K), 0.0061520 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>>> 836.964: [CMS-concurrent-sweep-start]
>>>> 836.965: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 836.965: [CMS-concurrent-reset-start]
>>>> 836.974: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 838.974: [GC [1 CMS-initial-mark: 12849K(21428K)] 60903K(139444K),
>>>> 0.0069860 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 838.982: [CMS-concurrent-mark-start]
>>>> 838.997: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 838.998: [CMS-concurrent-preclean-start]
>>>> 838.998: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 838.998: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 844.091:
>>>> [CMS-concurrent-abortable-preclean: 0.718/5.093 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.09 secs]
>>>> 844.092: [GC[YG occupancy: 48731 K (118016 K)]844.092: [Rescan
>>>> (parallel) , 0.0052610 secs]844.097: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 61581K(139444K), 0.0053620 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>>> 844.097: [CMS-concurrent-sweep-start]
>>>> 844.099: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 844.099: [CMS-concurrent-reset-start]
>>>> 844.108: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 846.109: [GC [1 CMS-initial-mark: 12849K(21428K)] 61709K(139444K),
>>>> 0.0071980 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 846.116: [CMS-concurrent-mark-start]
>>>> 846.133: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 846.133: [CMS-concurrent-preclean-start]
>>>> 846.134: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 846.134: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 851.137:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 851.137: [GC[YG occupancy: 49180 K (118016 K)]851.137: [Rescan
>>>> (parallel) , 0.0061320 secs]851.143: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 62030K(139444K), 0.0062320 secs]
>>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>>> 851.144: [CMS-concurrent-sweep-start]
>>>> 851.145: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 851.145: [CMS-concurrent-reset-start]
>>>> 851.154: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 853.154: [GC [1 CMS-initial-mark: 12849K(21428K)] 62158K(139444K),
>>>> 0.0071610 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 853.162: [CMS-concurrent-mark-start]
>>>> 853.177: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 853.177: [CMS-concurrent-preclean-start]
>>>> 853.178: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 853.178: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 858.181:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 858.181: [GC[YG occupancy: 49628 K (118016 K)]858.181: [Rescan
>>>> (parallel) , 0.0029560 secs]858.184: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 62478K(139444K), 0.0030590 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 858.184: [CMS-concurrent-sweep-start]
>>>> 858.186: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 858.186: [CMS-concurrent-reset-start]
>>>> 858.195: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 860.196: [GC [1 CMS-initial-mark: 12849K(21428K)] 62606K(139444K),
>>>> 0.0072070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 860.203: [CMS-concurrent-mark-start]
>>>> 860.219: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 860.219: [CMS-concurrent-preclean-start]
>>>> 860.219: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 860.219: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 865.226:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 865.227: [GC[YG occupancy: 50076 K (118016 K)]865.227: [Rescan
>>>> (parallel) , 0.0066610 secs]865.233: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 62926K(139444K), 0.0067670 secs]
>>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>>> 865.233: [CMS-concurrent-sweep-start]
>>>> 865.235: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 865.235: [CMS-concurrent-reset-start]
>>>> 865.244: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 867.244: [GC [1 CMS-initial-mark: 12849K(21428K)] 63054K(139444K),
>>>> 0.0072490 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 867.252: [CMS-concurrent-mark-start]
>>>> 867.267: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 867.267: [CMS-concurrent-preclean-start]
>>>> 867.268: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 867.268: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 872.281:
>>>> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 872.281: [GC[YG occupancy: 50525 K (118016 K)]872.281: [Rescan
>>>> (parallel) , 0.0053780 secs]872.286: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 63375K(139444K), 0.0054790 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>>> 872.287: [CMS-concurrent-sweep-start]
>>>> 872.288: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 872.288: [CMS-concurrent-reset-start]
>>>> 872.296: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 872.572: [GC [1 CMS-initial-mark: 12849K(21428K)] 63439K(139444K),
>>>> 0.0073060 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 872.580: [CMS-concurrent-mark-start]
>>>> 872.597: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 872.597: [CMS-concurrent-preclean-start]
>>>> 872.597: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 872.597: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 877.600:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 877.601: [GC[YG occupancy: 51049 K (118016 K)]877.601: [Rescan
>>>> (parallel) , 0.0063070 secs]877.607: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 63899K(139444K), 0.0064090 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>>> 877.607: [CMS-concurrent-sweep-start]
>>>> 877.609: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 877.609: [CMS-concurrent-reset-start]
>>>> 877.619: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 879.619: [GC [1 CMS-initial-mark: 12849K(21428K)] 64027K(139444K),
>>>> 0.0073320 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 879.626: [CMS-concurrent-mark-start]
>>>> 879.643: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 879.643: [CMS-concurrent-preclean-start]
>>>> 879.644: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 879.644: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 884.657:
>>>> [CMS-concurrent-abortable-preclean: 0.708/5.014 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 884.658: [GC[YG occupancy: 51497 K (118016 K)]884.658: [Rescan
>>>> (parallel) , 0.0056160 secs]884.663: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 64347K(139444K), 0.0057150 secs]
>>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>>> 884.663: [CMS-concurrent-sweep-start]
>>>> 884.665: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 884.665: [CMS-concurrent-reset-start]
>>>> 884.674: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 886.674: [GC [1 CMS-initial-mark: 12849K(21428K)] 64475K(139444K),
>>>> 0.0073420 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 886.682: [CMS-concurrent-mark-start]
>>>> 886.698: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 886.698: [CMS-concurrent-preclean-start]
>>>> 886.698: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 886.698: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 891.702:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 891.702: [GC[YG occupancy: 51945 K (118016 K)]891.702: [Rescan
>>>> (parallel) , 0.0070120 secs]891.709: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 64795K(139444K), 0.0071150 secs]
>>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>>> 891.709: [CMS-concurrent-sweep-start]
>>>> 891.711: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 891.711: [CMS-concurrent-reset-start]
>>>> 891.721: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 893.721: [GC [1 CMS-initial-mark: 12849K(21428K)] 64923K(139444K),
>>>> 0.0073880 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 893.728: [CMS-concurrent-mark-start]
>>>> 893.745: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 893.745: [CMS-concurrent-preclean-start]
>>>> 893.745: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 893.745: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 898.852:
>>>> [CMS-concurrent-abortable-preclean: 0.715/5.107 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.10 secs]
>>>> 898.853: [GC[YG occupancy: 53466 K (118016 K)]898.853: [Rescan
>>>> (parallel) , 0.0060600 secs]898.859: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 66315K(139444K), 0.0061640 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>>> 898.859: [CMS-concurrent-sweep-start]
>>>> 898.861: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 898.861: [CMS-concurrent-reset-start]
>>>> 898.870: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 900.871: [GC [1 CMS-initial-mark: 12849K(21428K)] 66444K(139444K),
>>>> 0.0074670 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 900.878: [CMS-concurrent-mark-start]
>>>> 900.895: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 900.895: [CMS-concurrent-preclean-start]
>>>> 900.896: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 900.896: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 905.969:
>>>> [CMS-concurrent-abortable-preclean: 0.716/5.074 secs] [Times:
>>>> user=0.72 sys=0.01, real=5.07 secs]
>>>> 905.969: [GC[YG occupancy: 54157 K (118016 K)]905.970: [Rescan
>>>> (parallel) , 0.0068200 secs]905.976: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 67007K(139444K), 0.0069250 secs]
>>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>>> 905.977: [CMS-concurrent-sweep-start]
>>>> 905.978: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 905.978: [CMS-concurrent-reset-start]
>>>> 905.986: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 907.986: [GC [1 CMS-initial-mark: 12849K(21428K)] 67135K(139444K),
>>>> 0.0076010 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 907.994: [CMS-concurrent-mark-start]
>>>> 908.009: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 908.009: [CMS-concurrent-preclean-start]
>>>> 908.010: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 908.010: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 913.013:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.69 sys=0.01, real=5.00 secs]
>>>> 913.013: [GC[YG occupancy: 54606 K (118016 K)]913.013: [Rescan
>>>> (parallel) , 0.0053650 secs]913.018: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 67455K(139444K), 0.0054650 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>>> 913.019: [CMS-concurrent-sweep-start]
>>>> 913.021: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 913.021: [CMS-concurrent-reset-start]
>>>> 913.030: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 915.030: [GC [1 CMS-initial-mark: 12849K(21428K)] 67583K(139444K),
>>>> 0.0076410 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 915.038: [CMS-concurrent-mark-start]
>>>> 915.055: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 915.055: [CMS-concurrent-preclean-start]
>>>> 915.056: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 915.056: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 920.058:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 920.058: [GC[YG occupancy: 55054 K (118016 K)]920.058: [Rescan
>>>> (parallel) , 0.0058380 secs]920.064: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 67904K(139444K), 0.0059420 secs]
>>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>>> 920.064: [CMS-concurrent-sweep-start]
>>>> 920.066: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 920.066: [CMS-concurrent-reset-start]
>>>> 920.075: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.01, real=0.01 secs]
>>>> 922.075: [GC [1 CMS-initial-mark: 12849K(21428K)] 68032K(139444K),
>>>> 0.0075820 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 922.083: [CMS-concurrent-mark-start]
>>>> 922.098: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 922.098: [CMS-concurrent-preclean-start]
>>>> 922.099: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 922.099: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 927.102:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 927.102: [GC[YG occupancy: 55502 K (118016 K)]927.102: [Rescan
>>>> (parallel) , 0.0059190 secs]927.108: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 68352K(139444K), 0.0060220 secs]
>>>> [Times: user=0.06 sys=0.01, real=0.01 secs]
>>>> 927.108: [CMS-concurrent-sweep-start]
>>>> 927.110: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 927.110: [CMS-concurrent-reset-start]
>>>> 927.120: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 929.120: [GC [1 CMS-initial-mark: 12849K(21428K)] 68480K(139444K),
>>>> 0.0077620 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 929.128: [CMS-concurrent-mark-start]
>>>> 929.145: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 929.145: [CMS-concurrent-preclean-start]
>>>> 929.145: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 929.145: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 934.237:
>>>> [CMS-concurrent-abortable-preclean: 0.717/5.092 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.09 secs]
>>>> 934.238: [GC[YG occupancy: 55991 K (118016 K)]934.238: [Rescan
>>>> (parallel) , 0.0042660 secs]934.242: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 68841K(139444K), 0.0043660 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.00 secs]
>>>> 934.242: [CMS-concurrent-sweep-start]
>>>> 934.244: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 934.244: [CMS-concurrent-reset-start]
>>>> 934.252: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 936.253: [GC [1 CMS-initial-mark: 12849K(21428K)] 68969K(139444K),
>>>> 0.0077340 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 936.261: [CMS-concurrent-mark-start]
>>>> 936.277: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 936.277: [CMS-concurrent-preclean-start]
>>>> 936.278: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 936.278: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 941.284:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 941.284: [GC[YG occupancy: 56439 K (118016 K)]941.284: [Rescan
>>>> (parallel) , 0.0059460 secs]941.290: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 69289K(139444K), 0.0060470 secs]
>>>> [Times: user=0.08 sys=0.00, real=0.00 secs]
>>>> 941.290: [CMS-concurrent-sweep-start]
>>>> 941.293: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 941.293: [CMS-concurrent-reset-start]
>>>> 941.302: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 943.302: [GC [1 CMS-initial-mark: 12849K(21428K)] 69417K(139444K),
>>>> 0.0077760 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 943.310: [CMS-concurrent-mark-start]
>>>> 943.326: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 943.326: [CMS-concurrent-preclean-start]
>>>> 943.327: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 943.327: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 948.340:
>>>> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 948.340: [GC[YG occupancy: 56888 K (118016 K)]948.340: [Rescan
>>>> (parallel) , 0.0047760 secs]948.345: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 69738K(139444K), 0.0048770 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 948.345: [CMS-concurrent-sweep-start]
>>>> 948.347: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 948.347: [CMS-concurrent-reset-start]
>>>> 948.356: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 950.356: [GC [1 CMS-initial-mark: 12849K(21428K)] 69866K(139444K),
>>>> 0.0077750 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 950.364: [CMS-concurrent-mark-start]
>>>> 950.380: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 950.380: [CMS-concurrent-preclean-start]
>>>> 950.380: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 950.380: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 955.384:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.004 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 955.384: [GC[YG occupancy: 57336 K (118016 K)]955.384: [Rescan
>>>> (parallel) , 0.0072540 secs]955.392: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 70186K(139444K), 0.0073540 secs]
>>>> [Times: user=0.08 sys=0.00, real=0.00 secs]
>>>> 955.392: [CMS-concurrent-sweep-start]
>>>> 955.394: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 955.394: [CMS-concurrent-reset-start]
>>>> 955.403: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 957.403: [GC [1 CMS-initial-mark: 12849K(21428K)] 70314K(139444K),
>>>> 0.0078120 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 957.411: [CMS-concurrent-mark-start]
>>>> 957.427: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 957.427: [CMS-concurrent-preclean-start]
>>>> 957.427: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 957.427: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 962.437:
>>>> [CMS-concurrent-abortable-preclean: 0.704/5.010 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 962.437: [GC[YG occupancy: 57889 K (118016 K)]962.437: [Rescan
>>>> (parallel) , 0.0076140 secs]962.445: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 70739K(139444K), 0.0077160 secs]
>>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>>> 962.445: [CMS-concurrent-sweep-start]
>>>> 962.446: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 962.446: [CMS-concurrent-reset-start]
>>>> 962.456: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 962.599: [GC [1 CMS-initial-mark: 12849K(21428K)] 70827K(139444K),
>>>> 0.0081180 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 962.608: [CMS-concurrent-mark-start]
>>>> 962.626: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 962.626: [CMS-concurrent-preclean-start]
>>>> 962.626: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 962.626: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 967.632:
>>>> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 967.632: [GC[YG occupancy: 58338 K (118016 K)]967.632: [Rescan
>>>> (parallel) , 0.0061170 secs]967.638: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 71188K(139444K), 0.0062190 secs]
>>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>>> 967.638: [CMS-concurrent-sweep-start]
>>>> 967.640: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 967.640: [CMS-concurrent-reset-start]
>>>> 967.648: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 969.648: [GC [1 CMS-initial-mark: 12849K(21428K)] 71316K(139444K),
>>>> 0.0081110 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 969.656: [CMS-concurrent-mark-start]
>>>> 969.674: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 969.674: [CMS-concurrent-preclean-start]
>>>> 969.674: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 969.674: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 974.677:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 974.677: [GC[YG occupancy: 58786 K (118016 K)]974.677: [Rescan
>>>> (parallel) , 0.0070810 secs]974.685: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 71636K(139444K), 0.0072050 secs]
>>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>>> 974.685: [CMS-concurrent-sweep-start]
>>>> 974.686: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 974.686: [CMS-concurrent-reset-start]
>>>> 974.695: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 976.696: [GC [1 CMS-initial-mark: 12849K(21428K)] 71764K(139444K),
>>>> 0.0080650 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 976.704: [CMS-concurrent-mark-start]
>>>> 976.719: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 976.719: [CMS-concurrent-preclean-start]
>>>> 976.719: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 976.719: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 981.727:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>>> user=0.69 sys=0.01, real=5.01 secs]
>>>> 981.727: [GC[YG occupancy: 59235 K (118016 K)]981.727: [Rescan
>>>> (parallel) , 0.0066570 secs]981.734: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 72085K(139444K), 0.0067620 secs]
>>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>>> 981.734: [CMS-concurrent-sweep-start]
>>>> 981.736: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 981.736: [CMS-concurrent-reset-start]
>>>> 981.745: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 983.745: [GC [1 CMS-initial-mark: 12849K(21428K)] 72213K(139444K),
>>>> 0.0081400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 983.753: [CMS-concurrent-mark-start]
>>>> 983.769: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 983.769: [CMS-concurrent-preclean-start]
>>>> 983.769: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 983.769: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 988.840:
>>>> [CMS-concurrent-abortable-preclean: 0.716/5.071 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.07 secs]
>>>> 988.840: [GC[YG occupancy: 59683 K (118016 K)]988.840: [Rescan
>>>> (parallel) , 0.0076020 secs]988.848: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 72533K(139444K), 0.0077100 secs]
>>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>>> 988.848: [CMS-concurrent-sweep-start]
>>>> 988.850: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 988.850: [CMS-concurrent-reset-start]
>>>> 988.858: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 990.858: [GC [1 CMS-initial-mark: 12849K(21428K)] 72661K(139444K),
>>>> 0.0081810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 990.867: [CMS-concurrent-mark-start]
>>>> 990.884: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 990.884: [CMS-concurrent-preclean-start]
>>>> 990.885: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 990.885: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 995.999:
>>>> [CMS-concurrent-abortable-preclean: 0.721/5.114 secs] [Times:
>>>> user=0.73 sys=0.00, real=5.11 secs]
>>>> 995.999: [GC[YG occupancy: 60307 K (118016 K)]995.999: [Rescan
>>>> (parallel) , 0.0058190 secs]996.005: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 73156K(139444K), 0.0059260 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>>> 996.005: [CMS-concurrent-sweep-start]
>>>> 996.007: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 996.007: [CMS-concurrent-reset-start]
>>>> 996.016: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 998.016: [GC [1 CMS-initial-mark: 12849K(21428K)] 73285K(139444K),
>>>> 0.0052760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 998.022: [CMS-concurrent-mark-start]
>>>> 998.038: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 998.038: [CMS-concurrent-preclean-start]
>>>> 998.039: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 998.039: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1003.048:
>>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1003.048: [GC[YG occupancy: 60755 K (118016 K)]1003.048: [Rescan
>>>> (parallel) , 0.0068040 secs]1003.055: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 73605K(139444K), 0.0069060 secs]
>>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>>> 1003.055: [CMS-concurrent-sweep-start]
>>>> 1003.057: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1003.057: [CMS-concurrent-reset-start]
>>>> 1003.066: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1005.066: [GC [1 CMS-initial-mark: 12849K(21428K)] 73733K(139444K),
>>>> 0.0082200 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1005.075: [CMS-concurrent-mark-start]
>>>> 1005.090: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1005.090: [CMS-concurrent-preclean-start]
>>>> 1005.090: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1005.090: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1010.094:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 1010.094: [GC[YG occupancy: 61203 K (118016 K)]1010.094: [Rescan
>>>> (parallel) , 0.0066010 secs]1010.101: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 74053K(139444K), 0.0067120 secs]
>>>> [Times: user=0.08 sys=0.00, real=0.00 secs]
>>>> 1010.101: [CMS-concurrent-sweep-start]
>>>> 1010.103: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1010.103: [CMS-concurrent-reset-start]
>>>> 1010.112: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1012.113: [GC [1 CMS-initial-mark: 12849K(21428K)] 74181K(139444K),
>>>> 0.0083460 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1012.121: [CMS-concurrent-mark-start]
>>>> 1012.137: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1012.137: [CMS-concurrent-preclean-start]
>>>> 1012.138: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1012.138: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1017.144:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 1017.144: [GC[YG occupancy: 61651 K (118016 K)]1017.144: [Rescan
>>>> (parallel) , 0.0058810 secs]1017.150: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 74501K(139444K), 0.0059830 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>>>> 1017.151: [CMS-concurrent-sweep-start]
>>>> 1017.153: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1017.153: [CMS-concurrent-reset-start]
>>>> 1017.162: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1019.162: [GC [1 CMS-initial-mark: 12849K(21428K)] 74629K(139444K),
>>>> 0.0083310 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1019.171: [CMS-concurrent-mark-start]
>>>> 1019.187: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1019.187: [CMS-concurrent-preclean-start]
>>>> 1019.187: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1019.187: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1024.261:
>>>> [CMS-concurrent-abortable-preclean: 0.717/5.074 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.07 secs]
>>>> 1024.261: [GC[YG occupancy: 62351 K (118016 K)]1024.262: [Rescan
>>>> (parallel) , 0.0069720 secs]1024.269: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 75200K(139444K), 0.0070750 secs]
>>>> [Times: user=0.08 sys=0.01, real=0.01 secs]
>>>> 1024.269: [CMS-concurrent-sweep-start]
>>>> 1024.270: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1024.270: [CMS-concurrent-reset-start]
>>>> 1024.278: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1026.279: [GC [1 CMS-initial-mark: 12849K(21428K)] 75329K(139444K),
>>>> 0.0086360 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1026.288: [CMS-concurrent-mark-start]
>>>> 1026.305: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1026.305: [CMS-concurrent-preclean-start]
>>>> 1026.305: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1026.305: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1031.308:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1031.308: [GC[YG occupancy: 62799 K (118016 K)]1031.308: [Rescan
>>>> (parallel) , 0.0069330 secs]1031.315: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 75649K(139444K), 0.0070380 secs]
>>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>>> 1031.315: [CMS-concurrent-sweep-start]
>>>> 1031.316: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1031.316: [CMS-concurrent-reset-start]
>>>> 1031.326: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1033.326: [GC [1 CMS-initial-mark: 12849K(21428K)] 75777K(139444K),
>>>> 0.0085850 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1033.335: [CMS-concurrent-mark-start]
>>>> 1033.350: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1033.350: [CMS-concurrent-preclean-start]
>>>> 1033.351: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1033.351: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1038.357:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.69 sys=0.01, real=5.01 secs]
>>>> 1038.358: [GC[YG occupancy: 63247 K (118016 K)]1038.358: [Rescan
>>>> (parallel) , 0.0071860 secs]1038.365: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 76097K(139444K), 0.0072900 secs]
>>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>>> 1038.365: [CMS-concurrent-sweep-start]
>>>> 1038.367: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1038.367: [CMS-concurrent-reset-start]
>>>> 1038.376: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1040.376: [GC [1 CMS-initial-mark: 12849K(21428K)] 76225K(139444K),
>>>> 0.0085910 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1040.385: [CMS-concurrent-mark-start]
>>>> 1040.401: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1040.401: [CMS-concurrent-preclean-start]
>>>> 1040.401: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1040.401: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1045.411:
>>>> [CMS-concurrent-abortable-preclean: 0.705/5.010 secs] [Times:
>>>> user=0.69 sys=0.01, real=5.01 secs]
>>>> 1045.412: [GC[YG occupancy: 63695 K (118016 K)]1045.412: [Rescan
>>>> (parallel) , 0.0082050 secs]1045.420: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 76545K(139444K), 0.0083110 secs]
>>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>>> 1045.420: [CMS-concurrent-sweep-start]
>>>> 1045.421: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1045.421: [CMS-concurrent-reset-start]
>>>> 1045.430: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1047.430: [GC [1 CMS-initial-mark: 12849K(21428K)] 76673K(139444K),
>>>> 0.0086110 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1047.439: [CMS-concurrent-mark-start]
>>>> 1047.456: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1047.456: [CMS-concurrent-preclean-start]
>>>> 1047.456: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1047.456: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1052.462:
>>>> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 1052.462: [GC[YG occupancy: 64144 K (118016 K)]1052.462: [Rescan
>>>> (parallel) , 0.0087770 secs]1052.471: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 76994K(139444K), 0.0088770 secs]
>>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>>> 1052.471: [CMS-concurrent-sweep-start]
>>>> 1052.472: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1052.472: [CMS-concurrent-reset-start]
>>>> 1052.481: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1052.628: [GC [1 CMS-initial-mark: 12849K(21428K)] 77058K(139444K),
>>>> 0.0086170 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1052.637: [CMS-concurrent-mark-start]
>>>> 1052.655: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1052.655: [CMS-concurrent-preclean-start]
>>>> 1052.656: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1052.656: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1057.658:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1057.658: [GC[YG occupancy: 64569 K (118016 K)]1057.658: [Rescan
>>>> (parallel) , 0.0072850 secs]1057.665: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 77418K(139444K), 0.0073880 secs]
>>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>>> 1057.666: [CMS-concurrent-sweep-start]
>>>> 1057.668: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1057.668: [CMS-concurrent-reset-start]
>>>> 1057.677: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1059.677: [GC [1 CMS-initial-mark: 12849K(21428K)] 77547K(139444K),
>>>> 0.0086820 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1059.686: [CMS-concurrent-mark-start]
>>>> 1059.703: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1059.703: [CMS-concurrent-preclean-start]
>>>> 1059.703: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1059.703: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1064.712:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1064.712: [GC[YG occupancy: 65017 K (118016 K)]1064.712: [Rescan
>>>> (parallel) , 0.0071630 secs]1064.720: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 77867K(139444K), 0.0072700 secs]
>>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>>> 1064.720: [CMS-concurrent-sweep-start]
>>>> 1064.722: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1064.722: [CMS-concurrent-reset-start]
>>>> 1064.731: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1066.731: [GC [1 CMS-initial-mark: 12849K(21428K)] 77995K(139444K),
>>>> 0.0087640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1066.740: [CMS-concurrent-mark-start]
>>>> 1066.757: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1066.757: [CMS-concurrent-preclean-start]
>>>> 1066.757: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1066.757: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1071.821:
>>>> [CMS-concurrent-abortable-preclean: 0.714/5.064 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.06 secs]
>>>> 1071.822: [GC[YG occupancy: 65465 K (118016 K)]1071.822: [Rescan
>>>> (parallel) , 0.0056280 secs]1071.827: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 78315K(139444K), 0.0057430 secs]
>>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>>> 1071.828: [CMS-concurrent-sweep-start]
>>>> 1071.830: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1071.830: [CMS-concurrent-reset-start]
>>>> 1071.839: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1073.839: [GC [1 CMS-initial-mark: 12849K(21428K)] 78443K(139444K),
>>>> 0.0087570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1073.848: [CMS-concurrent-mark-start]
>>>> 1073.865: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1073.865: [CMS-concurrent-preclean-start]
>>>> 1073.865: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1073.865: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1078.868:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1078.868: [GC[YG occupancy: 65914 K (118016 K)]1078.868: [Rescan
>>>> (parallel) , 0.0055280 secs]1078.873: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 78763K(139444K), 0.0056320 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 1078.874: [CMS-concurrent-sweep-start]
>>>> 1078.875: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1078.875: [CMS-concurrent-reset-start]
>>>> 1078.884: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1080.884: [GC [1 CMS-initial-mark: 12849K(21428K)] 78892K(139444K),
>>>> 0.0088520 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1080.893: [CMS-concurrent-mark-start]
>>>> 1080.908: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1080.909: [CMS-concurrent-preclean-start]
>>>> 1080.909: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1080.909: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1086.021:
>>>> [CMS-concurrent-abortable-preclean: 0.714/5.112 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.11 secs]
>>>> 1086.021: [GC[YG occupancy: 66531 K (118016 K)]1086.022: [Rescan
>>>> (parallel) , 0.0075330 secs]1086.029: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 79381K(139444K), 0.0076440 secs]
>>>> [Times: user=0.09 sys=0.01, real=0.01 secs]
>>>> 1086.029: [CMS-concurrent-sweep-start]
>>>> 1086.031: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1086.031: [CMS-concurrent-reset-start]
>>>> 1086.041: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1088.041: [GC [1 CMS-initial-mark: 12849K(21428K)] 79509K(139444K),
>>>> 0.0091350 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1088.050: [CMS-concurrent-mark-start]
>>>> 1088.066: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1088.067: [CMS-concurrent-preclean-start]
>>>> 1088.067: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1088.067: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1093.070:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.004 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1093.071: [GC[YG occupancy: 66980 K (118016 K)]1093.071: [Rescan
>>>> (parallel) , 0.0051870 secs]1093.076: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 79830K(139444K), 0.0052930 secs]
>>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>>> 1093.076: [CMS-concurrent-sweep-start]
>>>> 1093.078: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1093.078: [CMS-concurrent-reset-start]
>>>> 1093.087: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1095.088: [GC [1 CMS-initial-mark: 12849K(21428K)] 79958K(139444K),
>>>> 0.0091350 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1095.097: [CMS-concurrent-mark-start]
>>>> 1095.114: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1095.114: [CMS-concurrent-preclean-start]
>>>> 1095.115: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1095.115: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1100.121:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>>>> user=0.69 sys=0.01, real=5.00 secs]
>>>> 1100.121: [GC[YG occupancy: 67428 K (118016 K)]1100.122: [Rescan
>>>> (parallel) , 0.0068510 secs]1100.128: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 80278K(139444K), 0.0069510 secs]
>>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>>> 1100.129: [CMS-concurrent-sweep-start]
>>>> 1100.130: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1100.130: [CMS-concurrent-reset-start]
>>>> 1100.138: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1102.139: [GC [1 CMS-initial-mark: 12849K(21428K)] 80406K(139444K),
>>>> 0.0090760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1102.148: [CMS-concurrent-mark-start]
>>>> 1102.165: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1102.165: [CMS-concurrent-preclean-start]
>>>> 1102.165: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1102.165: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1107.168:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1107.168: [GC[YG occupancy: 67876 K (118016 K)]1107.168: [Rescan
>>>> (parallel) , 0.0076420 secs]1107.176: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 80726K(139444K), 0.0077500 secs]
>>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>>> 1107.176: [CMS-concurrent-sweep-start]
>>>> 1107.178: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1107.178: [CMS-concurrent-reset-start]
>>>> 1107.187: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1109.188: [GC [1 CMS-initial-mark: 12849K(21428K)] 80854K(139444K),
>>>> 0.0091510 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1109.197: [CMS-concurrent-mark-start]
>>>> 1109.214: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1109.214: [CMS-concurrent-preclean-start]
>>>> 1109.214: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1109.214: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1114.290:
>>>> [CMS-concurrent-abortable-preclean: 0.711/5.076 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.07 secs]
>>>> 1114.290: [GC[YG occupancy: 68473 K (118016 K)]1114.290: [Rescan
>>>> (parallel) , 0.0084730 secs]1114.299: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 81322K(139444K), 0.0085810 secs]
>>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>>> 1114.299: [CMS-concurrent-sweep-start]
>>>> 1114.301: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1114.301: [CMS-concurrent-reset-start]
>>>> 1114.310: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1115.803: [GC [1 CMS-initial-mark: 12849K(21428K)] 81451K(139444K),
>>>> 0.0106050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1115.814: [CMS-concurrent-mark-start]
>>>> 1115.830: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1115.830: [CMS-concurrent-preclean-start]
>>>> 1115.831: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1115.831: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1120.839:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1120.839: [GC[YG occupancy: 68921 K (118016 K)]1120.839: [Rescan
>>>> (parallel) , 0.0088800 secs]1120.848: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 81771K(139444K), 0.0089910 secs]
>>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>>> 1120.848: [CMS-concurrent-sweep-start]
>>>> 1120.850: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1120.850: [CMS-concurrent-reset-start]
>>>> 1120.858: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1122.859: [GC [1 CMS-initial-mark: 12849K(21428K)] 81899K(139444K),
>>>> 0.0092280 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1122.868: [CMS-concurrent-mark-start]
>>>> 1122.885: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1122.885: [CMS-concurrent-preclean-start]
>>>> 1122.885: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1122.885: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1127.888:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.69 sys=0.01, real=5.00 secs]
>>>> 1127.888: [GC[YG occupancy: 69369 K (118016 K)]1127.888: [Rescan
>>>> (parallel) , 0.0087740 secs]1127.897: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 82219K(139444K), 0.0088850 secs]
>>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>>> 1127.897: [CMS-concurrent-sweep-start]
>>>> 1127.898: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1127.898: [CMS-concurrent-reset-start]
>>>> 1127.906: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1129.907: [GC [1 CMS-initial-mark: 12849K(21428K)] 82347K(139444K),
>>>> 0.0092280 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1129.916: [CMS-concurrent-mark-start]
>>>> 1129.933: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1129.933: [CMS-concurrent-preclean-start]
>>>> 1129.934: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1129.934: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1134.938:
>>>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1134.938: [GC[YG occupancy: 69818 K (118016 K)]1134.939: [Rescan
>>>> (parallel) , 0.0078530 secs]1134.946: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 82667K(139444K), 0.0079630 secs]
>>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>>> 1134.947: [CMS-concurrent-sweep-start]
>>>> 1134.948: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1134.948: [CMS-concurrent-reset-start]
>>>> 1134.956: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1136.957: [GC [1 CMS-initial-mark: 12849K(21428K)] 82795K(139444K),
>>>> 0.0092760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1136.966: [CMS-concurrent-mark-start]
>>>> 1136.983: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.01 secs]
>>>> 1136.983: [CMS-concurrent-preclean-start]
>>>> 1136.984: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.01 secs]
>>>> 1136.984: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1141.991:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1141.991: [GC[YG occupancy: 70266 K (118016 K)]1141.991: [Rescan
>>>> (parallel) , 0.0090620 secs]1142.000: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 83116K(139444K), 0.0091700 secs]
>>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>>> 1142.000: [CMS-concurrent-sweep-start]
>>>> 1142.002: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1142.002: [CMS-concurrent-reset-start]
>>>> 1142.011: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1142.657: [GC [1 CMS-initial-mark: 12849K(21428K)] 83390K(139444K),
>>>> 0.0094330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1142.667: [CMS-concurrent-mark-start]
>>>> 1142.685: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1142.685: [CMS-concurrent-preclean-start]
>>>> 1142.686: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1142.686: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1147.688:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1147.688: [GC[YG occupancy: 70901 K (118016 K)]1147.688: [Rescan
>>>> (parallel) , 0.0081170 secs]1147.696: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 83751K(139444K), 0.0082390 secs]
>>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>>> 1147.697: [CMS-concurrent-sweep-start]
>>>> 1147.698: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1147.698: [CMS-concurrent-reset-start]
>>>> 1147.706: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1149.706: [GC [1 CMS-initial-mark: 12849K(21428K)] 83879K(139444K),
>>>> 0.0095560 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1149.716: [CMS-concurrent-mark-start]
>>>> 1149.734: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1149.734: [CMS-concurrent-preclean-start]
>>>> 1149.734: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1149.734: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1154.741:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1154.741: [GC[YG occupancy: 71349 K (118016 K)]1154.741: [Rescan
>>>> (parallel) , 0.0090720 secs]1154.750: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 84199K(139444K), 0.0091780 secs]
>>>> [Times: user=0.10 sys=0.01, real=0.01 secs]
>>>> 1154.750: [CMS-concurrent-sweep-start]
>>>> 1154.752: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1154.752: [CMS-concurrent-reset-start]
>>>> 1154.762: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1155.021: [GC [1 CMS-initial-mark: 12849K(21428K)] 84199K(139444K),
>>>> 0.0094030 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1155.031: [CMS-concurrent-mark-start]
>>>> 1155.047: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1155.047: [CMS-concurrent-preclean-start]
>>>> 1155.047: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1155.047: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1160.056:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1160.056: [GC[YG occupancy: 71669 K (118016 K)]1160.056: [Rescan
>>>> (parallel) , 0.0056520 secs]1160.062: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 84519K(139444K), 0.0057790 secs]
>>>> [Times: user=0.07 sys=0.00, real=0.00 secs]
>>>> 1160.062: [CMS-concurrent-sweep-start]
>>>> 1160.064: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1160.064: [CMS-concurrent-reset-start]
>>>> 1160.073: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1162.074: [GC [1 CMS-initial-mark: 12849K(21428K)] 84647K(139444K),
>>>> 0.0095040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1162.083: [CMS-concurrent-mark-start]
>>>> 1162.098: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1162.098: [CMS-concurrent-preclean-start]
>>>> 1162.099: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1162.099: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1167.102:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1167.102: [GC[YG occupancy: 72118 K (118016 K)]1167.102: [Rescan
>>>> (parallel) , 0.0072180 secs]1167.110: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 84968K(139444K), 0.0073300 secs]
>>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>>> 1167.110: [CMS-concurrent-sweep-start]
>>>> 1167.112: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1167.112: [CMS-concurrent-reset-start]
>>>> 1167.121: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1169.121: [GC [1 CMS-initial-mark: 12849K(21428K)] 85096K(139444K),
>>>> 0.0096940 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1169.131: [CMS-concurrent-mark-start]
>>>> 1169.147: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1169.147: [CMS-concurrent-preclean-start]
>>>> 1169.147: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1169.147: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1174.197:
>>>> [CMS-concurrent-abortable-preclean: 0.720/5.050 secs] [Times:
>>>> user=0.72 sys=0.01, real=5.05 secs]
>>>> 1174.198: [GC[YG occupancy: 72607 K (118016 K)]1174.198: [Rescan
>>>> (parallel) , 0.0064910 secs]1174.204: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 85456K(139444K), 0.0065940 secs]
>>>> [Times: user=0.06 sys=0.01, real=0.01 secs]
>>>> 1174.204: [CMS-concurrent-sweep-start]
>>>> 1174.206: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1174.206: [CMS-concurrent-reset-start]
>>>> 1174.215: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1176.215: [GC [1 CMS-initial-mark: 12849K(21428K)] 85585K(139444K),
>>>> 0.0095940 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1176.225: [CMS-concurrent-mark-start]
>>>> 1176.240: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1176.240: [CMS-concurrent-preclean-start]
>>>> 1176.241: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1176.241: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1181.244:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1181.244: [GC[YG occupancy: 73055 K (118016 K)]1181.244: [Rescan
>>>> (parallel) , 0.0093030 secs]1181.254: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 85905K(139444K), 0.0094040 secs]
>>>> [Times: user=0.09 sys=0.01, real=0.01 secs]
>>>> 1181.254: [CMS-concurrent-sweep-start]
>>>> 1181.256: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1181.256: [CMS-concurrent-reset-start]
>>>> 1181.265: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1183.266: [GC [1 CMS-initial-mark: 12849K(21428K)] 86033K(139444K),
>>>> 0.0096490 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1183.275: [CMS-concurrent-mark-start]
>>>> 1183.293: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
>>>> sys=0.00, real=0.02 secs]
>>>> 1183.293: [CMS-concurrent-preclean-start]
>>>> 1183.294: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1183.294: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1188.301:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1188.301: [GC[YG occupancy: 73503 K (118016 K)]1188.301: [Rescan
>>>> (parallel) , 0.0092610 secs]1188.310: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 86353K(139444K), 0.0093750 secs]
>>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>>> 1188.310: [CMS-concurrent-sweep-start]
>>>> 1188.312: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1188.312: [CMS-concurrent-reset-start]
>>>> 1188.320: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1190.321: [GC [1 CMS-initial-mark: 12849K(21428K)] 86481K(139444K),
>>>> 0.0097510 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1190.331: [CMS-concurrent-mark-start]
>>>> 1190.347: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1190.347: [CMS-concurrent-preclean-start]
>>>> 1190.347: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1190.347: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1195.359:
>>>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1195.359: [GC[YG occupancy: 73952 K (118016 K)]1195.359: [Rescan
>>>> (parallel) , 0.0093210 secs]1195.368: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 86801K(139444K), 0.0094330 secs]
>>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>>> 1195.369: [CMS-concurrent-sweep-start]
>>>> 1195.370: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1195.370: [CMS-concurrent-reset-start]
>>>> 1195.378: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1196.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 88001K(139444K),
>>>> 0.0099870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1196.553: [CMS-concurrent-mark-start]
>>>> 1196.570: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1196.570: [CMS-concurrent-preclean-start]
>>>> 1196.570: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1196.570: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1201.574:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1201.574: [GC[YG occupancy: 75472 K (118016 K)]1201.574: [Rescan
>>>> (parallel) , 0.0096480 secs]1201.584: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 88322K(139444K), 0.0097500 secs]
>>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>>> 1201.584: [CMS-concurrent-sweep-start]
>>>> 1201.586: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1201.586: [CMS-concurrent-reset-start]
>>>> 1201.595: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1202.679: [GC [1 CMS-initial-mark: 12849K(21428K)] 88491K(139444K),
>>>> 0.0099400 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1202.690: [CMS-concurrent-mark-start]
>>>> 1202.708: [CMS-concurrent-mark: 0.016/0.019 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1202.708: [CMS-concurrent-preclean-start]
>>>> 1202.709: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1202.709: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1207.718:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1207.718: [GC[YG occupancy: 76109 K (118016 K)]1207.718: [Rescan
>>>> (parallel) , 0.0096360 secs]1207.727: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 88959K(139444K), 0.0097380 secs]
>>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>>> 1207.728: [CMS-concurrent-sweep-start]
>>>> 1207.729: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1207.729: [CMS-concurrent-reset-start]
>>>> 1207.737: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1209.738: [GC [1 CMS-initial-mark: 12849K(21428K)] 89087K(139444K),
>>>> 0.0099440 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1209.748: [CMS-concurrent-mark-start]
>>>> 1209.765: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1209.765: [CMS-concurrent-preclean-start]
>>>> 1209.765: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1209.765: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1214.797:
>>>> [CMS-concurrent-abortable-preclean: 0.716/5.031 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.03 secs]
>>>> 1214.797: [GC[YG occupancy: 76557 K (118016 K)]1214.797: [Rescan
>>>> (parallel) , 0.0096280 secs]1214.807: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 89407K(139444K), 0.0097320 secs]
>>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>>> 1214.807: [CMS-concurrent-sweep-start]
>>>> 1214.808: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1214.808: [CMS-concurrent-reset-start]
>>>> 1214.816: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1216.817: [GC [1 CMS-initial-mark: 12849K(21428K)] 89535K(139444K),
>>>> 0.0099640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1216.827: [CMS-concurrent-mark-start]
>>>> 1216.844: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1216.844: [CMS-concurrent-preclean-start]
>>>> 1216.844: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1216.844: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1221.847:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1221.847: [GC[YG occupancy: 77005 K (118016 K)]1221.847: [Rescan
>>>> (parallel) , 0.0061810 secs]1221.854: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 89855K(139444K), 0.0062950 secs]
>>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>>> 1221.854: [CMS-concurrent-sweep-start]
>>>> 1221.855: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1221.855: [CMS-concurrent-reset-start]
>>>> 1221.864: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1223.865: [GC [1 CMS-initial-mark: 12849K(21428K)] 89983K(139444K),
>>>> 0.0100430 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1223.875: [CMS-concurrent-mark-start]
>>>> 1223.890: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1223.890: [CMS-concurrent-preclean-start]
>>>> 1223.891: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1223.891: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1228.899:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1228.899: [GC[YG occupancy: 77454 K (118016 K)]1228.899: [Rescan
>>>> (parallel) , 0.0095850 secs]1228.909: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 90304K(139444K), 0.0096960 secs]
>>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>>> 1228.909: [CMS-concurrent-sweep-start]
>>>> 1228.911: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1228.911: [CMS-concurrent-reset-start]
>>>> 1228.919: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1230.919: [GC [1 CMS-initial-mark: 12849K(21428K)] 90432K(139444K),
>>>> 0.0101360 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1230.930: [CMS-concurrent-mark-start]
>>>> 1230.946: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1230.946: [CMS-concurrent-preclean-start]
>>>> 1230.947: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1230.947: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1235.952:
>>>> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1235.952: [GC[YG occupancy: 77943 K (118016 K)]1235.952: [Rescan
>>>> (parallel) , 0.0084420 secs]1235.961: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 90793K(139444K), 0.0085450 secs]
>>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>>> 1235.961: [CMS-concurrent-sweep-start]
>>>> 1235.963: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1235.963: [CMS-concurrent-reset-start]
>>>> 1235.972: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1237.973: [GC [1 CMS-initial-mark: 12849K(21428K)] 90921K(139444K),
>>>> 0.0101280 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1237.983: [CMS-concurrent-mark-start]
>>>> 1237.998: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1237.998: [CMS-concurrent-preclean-start]
>>>> 1237.999: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1237.999: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1243.008:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1243.008: [GC[YG occupancy: 78391 K (118016 K)]1243.008: [Rescan
>>>> (parallel) , 0.0090510 secs]1243.017: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 91241K(139444K), 0.0091560 secs]
>>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>>> 1243.017: [CMS-concurrent-sweep-start]
>>>> 1243.019: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1243.019: [CMS-concurrent-reset-start]
>>>> 1243.027: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1245.027: [GC [1 CMS-initial-mark: 12849K(21428K)] 91369K(139444K),
>>>> 0.0101080 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1245.038: [CMS-concurrent-mark-start]
>>>> 1245.055: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1245.055: [CMS-concurrent-preclean-start]
>>>> 1245.055: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1245.055: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1250.058:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1250.058: [GC[YG occupancy: 78839 K (118016 K)]1250.058: [Rescan
>>>> (parallel) , 0.0096920 secs]1250.068: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 91689K(139444K), 0.0098040 secs]
>>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>>> 1250.068: [CMS-concurrent-sweep-start]
>>>> 1250.070: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1250.070: [CMS-concurrent-reset-start]
>>>> 1250.078: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1252.078: [GC [1 CMS-initial-mark: 12849K(21428K)] 91817K(139444K),
>>>> 0.0102560 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1252.089: [CMS-concurrent-mark-start]
>>>> 1252.105: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1252.105: [CMS-concurrent-preclean-start]
>>>> 1252.106: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1252.106: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1257.113:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1257.113: [GC[YG occupancy: 79288 K (118016 K)]1257.113: [Rescan
>>>> (parallel) , 0.0089920 secs]1257.122: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 92137K(139444K), 0.0090960 secs]
>>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>>> 1257.122: [CMS-concurrent-sweep-start]
>>>> 1257.124: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1257.124: [CMS-concurrent-reset-start]
>>>> 1257.133: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1259.134: [GC [1 CMS-initial-mark: 12849K(21428K)] 92266K(139444K),
>>>> 0.0101720 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1259.144: [CMS-concurrent-mark-start]
>>>> 1259.159: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 1259.159: [CMS-concurrent-preclean-start]
>>>> 1259.159: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1259.159: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1264.229:
>>>> [CMS-concurrent-abortable-preclean: 0.716/5.070 secs] [Times:
>>>> user=0.72 sys=0.01, real=5.07 secs]
>>>> 1264.229: [GC[YG occupancy: 79881 K (118016 K)]1264.229: [Rescan
>>>> (parallel) , 0.0101320 secs]1264.240: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 92731K(139444K), 0.0102440 secs]
>>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>>> 1264.240: [CMS-concurrent-sweep-start]
>>>> 1264.241: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1264.241: [CMS-concurrent-reset-start]
>>>> 1264.250: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1266.250: [GC [1 CMS-initial-mark: 12849K(21428K)] 92859K(139444K),
>>>> 0.0105180 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1266.261: [CMS-concurrent-mark-start]
>>>> 1266.277: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1266.277: [CMS-concurrent-preclean-start]
>>>> 1266.277: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1266.277: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1271.285:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1271.285: [GC[YG occupancy: 80330 K (118016 K)]1271.285: [Rescan
>>>> (parallel) , 0.0094600 secs]1271.295: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 93180K(139444K), 0.0095600 secs]
>>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>>> 1271.295: [CMS-concurrent-sweep-start]
>>>> 1271.297: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1271.297: [CMS-concurrent-reset-start]
>>>> 1271.306: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1273.306: [GC [1 CMS-initial-mark: 12849K(21428K)] 93308K(139444K),
>>>> 0.0104100 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1273.317: [CMS-concurrent-mark-start]
>>>> 1273.334: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1273.334: [CMS-concurrent-preclean-start]
>>>> 1273.335: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1273.335: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1278.341:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1278.341: [GC[YG occupancy: 80778 K (118016 K)]1278.341: [Rescan
>>>> (parallel) , 0.0101320 secs]1278.351: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 93628K(139444K), 0.0102460 secs]
>>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>>> 1278.351: [CMS-concurrent-sweep-start]
>>>> 1278.353: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1278.353: [CMS-concurrent-reset-start]
>>>> 1278.362: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1280.362: [GC [1 CMS-initial-mark: 12849K(21428K)] 93756K(139444K),
>>>> 0.0105680 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1280.373: [CMS-concurrent-mark-start]
>>>> 1280.388: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1280.388: [CMS-concurrent-preclean-start]
>>>> 1280.388: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1280.388: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1285.400:
>>>> [CMS-concurrent-abortable-preclean: 0.706/5.012 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1285.400: [GC[YG occupancy: 81262 K (118016 K)]1285.400: [Rescan
>>>> (parallel) , 0.0093660 secs]1285.410: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 94111K(139444K), 0.0094820 secs]
>>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>>> 1285.410: [CMS-concurrent-sweep-start]
>>>> 1285.411: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1285.411: [CMS-concurrent-reset-start]
>>>> 1285.420: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1287.420: [GC [1 CMS-initial-mark: 12849K(21428K)] 94240K(139444K),
>>>> 0.0105800 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1287.431: [CMS-concurrent-mark-start]
>>>> 1287.447: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1287.447: [CMS-concurrent-preclean-start]
>>>> 1287.447: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1287.447: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1292.460:
>>>> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1292.460: [GC[YG occupancy: 81710 K (118016 K)]1292.460: [Rescan
>>>> (parallel) , 0.0081130 secs]1292.468: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 94560K(139444K), 0.0082210 secs]
>>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>>> 1292.468: [CMS-concurrent-sweep-start]
>>>> 1292.470: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1292.470: [CMS-concurrent-reset-start]
>>>> 1292.480: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1292.712: [GC [1 CMS-initial-mark: 12849K(21428K)] 94624K(139444K),
>>>> 0.0104870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1292.723: [CMS-concurrent-mark-start]
>>>> 1292.739: [CMS-concurrent-mark: 0.015/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1292.739: [CMS-concurrent-preclean-start]
>>>> 1292.740: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1292.740: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1297.748:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1297.748: [GC[YG occupancy: 82135 K (118016 K)]1297.748: [Rescan
>>>> (parallel) , 0.0106180 secs]1297.759: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 94985K(139444K), 0.0107410 secs]
>>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>>> 1297.759: [CMS-concurrent-sweep-start]
>>>> 1297.760: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1297.761: [CMS-concurrent-reset-start]
>>>> 1297.769: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1299.769: [GC [1 CMS-initial-mark: 12849K(21428K)] 95113K(139444K),
>>>> 0.0105340 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1299.780: [CMS-concurrent-mark-start]
>>>> 1299.796: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1299.796: [CMS-concurrent-preclean-start]
>>>> 1299.797: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1299.797: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1304.805:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.01 secs]
>>>> 1304.805: [GC[YG occupancy: 82583 K (118016 K)]1304.806: [Rescan
>>>> (parallel) , 0.0094010 secs]1304.815: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 95433K(139444K), 0.0095140 secs]
>>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>>> 1304.815: [CMS-concurrent-sweep-start]
>>>> 1304.817: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1304.817: [CMS-concurrent-reset-start]
>>>> 1304.827: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1306.827: [GC [1 CMS-initial-mark: 12849K(21428K)] 95561K(139444K),
>>>> 0.0107300 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1306.838: [CMS-concurrent-mark-start]
>>>> 1306.855: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1306.855: [CMS-concurrent-preclean-start]
>>>> 1306.855: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1306.855: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1311.858:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1311.858: [GC[YG occupancy: 83032 K (118016 K)]1311.858: [Rescan
>>>> (parallel) , 0.0094210 secs]1311.867: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 95882K(139444K), 0.0095360 secs]
>>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>>> 1311.868: [CMS-concurrent-sweep-start]
>>>> 1311.869: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1311.869: [CMS-concurrent-reset-start]
>>>> 1311.877: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1313.878: [GC [1 CMS-initial-mark: 12849K(21428K)] 96010K(139444K),
>>>> 0.0107870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1313.889: [CMS-concurrent-mark-start]
>>>> 1313.905: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1313.905: [CMS-concurrent-preclean-start]
>>>> 1313.906: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1313.906: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1318.914:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1318.915: [GC[YG occupancy: 83481 K (118016 K)]1318.915: [Rescan
>>>> (parallel) , 0.0096280 secs]1318.924: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 96331K(139444K), 0.0097340 secs]
>>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>>> 1318.925: [CMS-concurrent-sweep-start]
>>>> 1318.927: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1318.927: [CMS-concurrent-reset-start]
>>>> 1318.936: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1320.936: [GC [1 CMS-initial-mark: 12849K(21428K)] 96459K(139444K),
>>>> 0.0106300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1320.947: [CMS-concurrent-mark-start]
>>>> 1320.964: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1320.964: [CMS-concurrent-preclean-start]
>>>> 1320.965: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1320.965: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1325.991:
>>>> [CMS-concurrent-abortable-preclean: 0.717/5.026 secs] [Times:
>>>> user=0.73 sys=0.00, real=5.02 secs]
>>>> 1325.991: [GC[YG occupancy: 84205 K (118016 K)]1325.991: [Rescan
>>>> (parallel) , 0.0097880 secs]1326.001: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 97055K(139444K), 0.0099010 secs]
>>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>>> 1326.001: [CMS-concurrent-sweep-start]
>>>> 1326.003: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1326.003: [CMS-concurrent-reset-start]
>>>> 1326.012: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1328.013: [GC [1 CMS-initial-mark: 12849K(21428K)] 97183K(139444K),
>>>> 0.0109730 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1328.024: [CMS-concurrent-mark-start]
>>>> 1328.039: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1328.039: [CMS-concurrent-preclean-start]
>>>> 1328.039: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1328.039: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1333.043:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 1333.043: [GC[YG occupancy: 84654 K (118016 K)]1333.043: [Rescan
>>>> (parallel) , 0.0110740 secs]1333.054: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 97504K(139444K), 0.0111760 secs]
>>>> [Times: user=0.12 sys=0.01, real=0.02 secs]
>>>> 1333.054: [CMS-concurrent-sweep-start]
>>>> 1333.056: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1333.056: [CMS-concurrent-reset-start]
>>>> 1333.065: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1335.066: [GC [1 CMS-initial-mark: 12849K(21428K)] 97632K(139444K),
>>>> 0.0109300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1335.077: [CMS-concurrent-mark-start]
>>>> 1335.094: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1335.094: [CMS-concurrent-preclean-start]
>>>> 1335.094: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1335.094: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1340.103:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1340.103: [GC[YG occupancy: 85203 K (118016 K)]1340.103: [Rescan
>>>> (parallel) , 0.0109470 secs]1340.114: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 98052K(139444K), 0.0110500 secs]
>>>> [Times: user=0.11 sys=0.01, real=0.02 secs]
>>>> 1340.114: [CMS-concurrent-sweep-start]
>>>> 1340.116: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1340.116: [CMS-concurrent-reset-start]
>>>> 1340.125: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1342.126: [GC [1 CMS-initial-mark: 12849K(21428K)] 98181K(139444K),
>>>> 0.0109170 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1342.137: [CMS-concurrent-mark-start]
>>>> 1342.154: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1342.154: [CMS-concurrent-preclean-start]
>>>> 1342.154: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1342.154: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1347.161:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1347.162: [GC[YG occupancy: 85652 K (118016 K)]1347.162: [Rescan
>>>> (parallel) , 0.0075610 secs]1347.169: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 98502K(139444K), 0.0076680 secs]
>>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>>> 1347.169: [CMS-concurrent-sweep-start]
>>>> 1347.171: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1347.172: [CMS-concurrent-reset-start]
>>>> 1347.181: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1349.181: [GC [1 CMS-initial-mark: 12849K(21428K)] 98630K(139444K),
>>>> 0.0109540 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1349.192: [CMS-concurrent-mark-start]
>>>> 1349.208: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1349.208: [CMS-concurrent-preclean-start]
>>>> 1349.208: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1349.208: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1354.268:
>>>> [CMS-concurrent-abortable-preclean: 0.723/5.060 secs] [Times:
>>>> user=0.73 sys=0.00, real=5.06 secs]
>>>> 1354.268: [GC[YG occupancy: 86241 K (118016 K)]1354.268: [Rescan
>>>> (parallel) , 0.0099530 secs]1354.278: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 99091K(139444K), 0.0100670 secs]
>>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>>> 1354.278: [CMS-concurrent-sweep-start]
>>>> 1354.280: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1354.280: [CMS-concurrent-reset-start]
>>>> 1354.288: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1356.289: [GC [1 CMS-initial-mark: 12849K(21428K)] 99219K(139444K),
>>>> 0.0111450 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1356.300: [CMS-concurrent-mark-start]
>>>> 1356.316: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1356.316: [CMS-concurrent-preclean-start]
>>>> 1356.317: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1356.317: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1361.322:
>>>> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1361.322: [GC[YG occupancy: 86690 K (118016 K)]1361.322: [Rescan
>>>> (parallel) , 0.0097180 secs]1361.332: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 99540K(139444K), 0.0098210 secs]
>>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>>> 1361.332: [CMS-concurrent-sweep-start]
>>>> 1361.333: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1361.333: [CMS-concurrent-reset-start]
>>>> 1361.342: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1363.342: [GC [1 CMS-initial-mark: 12849K(21428K)] 99668K(139444K),
>>>> 0.0110230 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1363.354: [CMS-concurrent-mark-start]
>>>> 1363.368: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1363.368: [CMS-concurrent-preclean-start]
>>>> 1363.369: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1363.369: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1368.378:
>>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1368.378: [GC[YG occupancy: 87139 K (118016 K)]1368.378: [Rescan
>>>> (parallel) , 0.0100770 secs]1368.388: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 99989K(139444K), 0.0101900 secs]
>>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>>> 1368.388: [CMS-concurrent-sweep-start]
>>>> 1368.390: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1368.390: [CMS-concurrent-reset-start]
>>>> 1368.398: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1370.399: [GC [1 CMS-initial-mark: 12849K(21428K)] 100117K(139444K),
>>>> 0.0111810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1370.410: [CMS-concurrent-mark-start]
>>>> 1370.426: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1370.426: [CMS-concurrent-preclean-start]
>>>> 1370.427: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1370.427: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1375.447:
>>>> [CMS-concurrent-abortable-preclean: 0.715/5.020 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.02 secs]
>>>> 1375.447: [GC[YG occupancy: 87588 K (118016 K)]1375.447: [Rescan
>>>> (parallel) , 0.0101690 secs]1375.457: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 100438K(139444K), 0.0102730 secs]
>>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>>> 1375.457: [CMS-concurrent-sweep-start]
>>>> 1375.459: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1375.459: [CMS-concurrent-reset-start]
>>>> 1375.467: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1377.467: [GC [1 CMS-initial-mark: 12849K(21428K)] 100566K(139444K),
>>>> 0.0110760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1377.478: [CMS-concurrent-mark-start]
>>>> 1377.495: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1377.495: [CMS-concurrent-preclean-start]
>>>> 1377.496: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1377.496: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1382.502:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.69 sys=0.01, real=5.00 secs]
>>>> 1382.502: [GC[YG occupancy: 89213 K (118016 K)]1382.502: [Rescan
>>>> (parallel) , 0.0108630 secs]1382.513: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 102063K(139444K), 0.0109700 secs]
>>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>>> 1382.513: [CMS-concurrent-sweep-start]
>>>> 1382.514: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1382.514: [CMS-concurrent-reset-start]
>>>> 1382.523: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1382.743: [GC [1 CMS-initial-mark: 12849K(21428K)] 102127K(139444K),
>>>> 0.0113140 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1382.755: [CMS-concurrent-mark-start]
>>>> 1382.773: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1382.773: [CMS-concurrent-preclean-start]
>>>> 1382.774: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1382.774: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1387.777:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1387.777: [GC[YG occupancy: 89638 K (118016 K)]1387.777: [Rescan
>>>> (parallel) , 0.0113310 secs]1387.789: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 102488K(139444K), 0.0114430 secs]
>>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>>> 1387.789: [CMS-concurrent-sweep-start]
>>>> 1387.790: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1387.790: [CMS-concurrent-reset-start]
>>>> 1387.799: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1389.799: [GC [1 CMS-initial-mark: 12849K(21428K)] 102617K(139444K),
>>>> 0.0113540 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1389.810: [CMS-concurrent-mark-start]
>>>> 1389.827: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1389.827: [CMS-concurrent-preclean-start]
>>>> 1389.827: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1389.827: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1394.831:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1394.831: [GC[YG occupancy: 90088 K (118016 K)]1394.831: [Rescan
>>>> (parallel) , 0.0103790 secs]1394.841: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 102938K(139444K), 0.0104960 secs]
>>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>>> 1394.842: [CMS-concurrent-sweep-start]
>>>> 1394.844: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1394.844: [CMS-concurrent-reset-start]
>>>> 1394.853: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1396.853: [GC [1 CMS-initial-mark: 12849K(21428K)] 103066K(139444K),
>>>> 0.0114740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1396.865: [CMS-concurrent-mark-start]
>>>> 1396.880: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1396.880: [CMS-concurrent-preclean-start]
>>>> 1396.881: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1396.881: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1401.890:
>>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1401.890: [GC[YG occupancy: 90537 K (118016 K)]1401.891: [Rescan
>>>> (parallel) , 0.0116110 secs]1401.902: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 103387K(139444K), 0.0117240 secs]
>>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>>> 1401.902: [CMS-concurrent-sweep-start]
>>>> 1401.904: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1401.904: [CMS-concurrent-reset-start]
>>>> 1401.914: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1403.914: [GC [1 CMS-initial-mark: 12849K(21428K)] 103515K(139444K),
>>>> 0.0111980 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1403.925: [CMS-concurrent-mark-start]
>>>> 1403.943: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.01 secs]
>>>> 1403.943: [CMS-concurrent-preclean-start]
>>>> 1403.944: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.01 secs]
>>>> 1403.944: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1408.982:
>>>> [CMS-concurrent-abortable-preclean: 0.718/5.038 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.03 secs]
>>>> 1408.982: [GC[YG occupancy: 90986 K (118016 K)]1408.982: [Rescan
>>>> (parallel) , 0.0115260 secs]1408.994: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 103836K(139444K), 0.0116320 secs]
>>>> [Times: user=0.13 sys=0.00, real=0.02 secs]
>>>> 1408.994: [CMS-concurrent-sweep-start]
>>>> 1408.996: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1408.996: [CMS-concurrent-reset-start]
>>>> 1409.005: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1411.005: [GC [1 CMS-initial-mark: 12849K(21428K)] 103964K(139444K),
>>>> 0.0114590 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1411.017: [CMS-concurrent-mark-start]
>>>> 1411.034: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1411.034: [CMS-concurrent-preclean-start]
>>>> 1411.034: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1411.034: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1416.140:
>>>> [CMS-concurrent-abortable-preclean: 0.712/5.105 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.10 secs]
>>>> 1416.140: [GC[YG occupancy: 91476 K (118016 K)]1416.140: [Rescan
>>>> (parallel) , 0.0114950 secs]1416.152: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 104326K(139444K), 0.0116020 secs]
>>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>>> 1416.152: [CMS-concurrent-sweep-start]
>>>> 1416.154: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1416.154: [CMS-concurrent-reset-start]
>>>> 1416.163: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1418.163: [GC [1 CMS-initial-mark: 12849K(21428K)] 104454K(139444K),
>>>> 0.0114040 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1418.175: [CMS-concurrent-mark-start]
>>>> 1418.191: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1418.191: [CMS-concurrent-preclean-start]
>>>> 1418.191: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1418.191: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1423.198:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1423.199: [GC[YG occupancy: 91925 K (118016 K)]1423.199: [Rescan
>>>> (parallel) , 0.0105460 secs]1423.209: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 104775K(139444K), 0.0106640 secs]
>>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>>> 1423.209: [CMS-concurrent-sweep-start]
>>>> 1423.211: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1423.211: [CMS-concurrent-reset-start]
>>>> 1423.220: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1425.220: [GC [1 CMS-initial-mark: 12849K(21428K)] 104903K(139444K),
>>>> 0.0116300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1425.232: [CMS-concurrent-mark-start]
>>>> 1425.248: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1425.248: [CMS-concurrent-preclean-start]
>>>> 1425.248: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1425.248: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1430.252:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 1430.252: [GC[YG occupancy: 92374 K (118016 K)]1430.252: [Rescan
>>>> (parallel) , 0.0098720 secs]1430.262: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 105224K(139444K), 0.0099750 secs]
>>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>>> 1430.262: [CMS-concurrent-sweep-start]
>>>> 1430.264: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1430.264: [CMS-concurrent-reset-start]
>>>> 1430.273: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1432.274: [GC [1 CMS-initial-mark: 12849K(21428K)] 105352K(139444K),
>>>> 0.0114050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1432.285: [CMS-concurrent-mark-start]
>>>> 1432.301: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1432.301: [CMS-concurrent-preclean-start]
>>>> 1432.301: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1432.301: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1437.304:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 1437.305: [GC[YG occupancy: 92823 K (118016 K)]1437.305: [Rescan
>>>> (parallel) , 0.0115010 secs]1437.316: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 105673K(139444K), 0.0116090 secs]
>>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>>> 1437.316: [CMS-concurrent-sweep-start]
>>>> 1437.319: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1437.319: [CMS-concurrent-reset-start]
>>>> 1437.328: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1439.328: [GC [1 CMS-initial-mark: 12849K(21428K)] 105801K(139444K),
>>>> 0.0115740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1439.340: [CMS-concurrent-mark-start]
>>>> 1439.356: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1439.356: [CMS-concurrent-preclean-start]
>>>> 1439.356: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1439.356: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1444.411:
>>>> [CMS-concurrent-abortable-preclean: 0.715/5.054 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.05 secs]
>>>> 1444.411: [GC[YG occupancy: 93547 K (118016 K)]1444.411: [Rescan
>>>> (parallel) , 0.0072910 secs]1444.418: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 106397K(139444K), 0.0073970 secs]
>>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>>> 1444.419: [CMS-concurrent-sweep-start]
>>>> 1444.420: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1444.420: [CMS-concurrent-reset-start]
>>>> 1444.429: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1446.429: [GC [1 CMS-initial-mark: 12849K(21428K)] 106525K(139444K),
>>>> 0.0117950 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1446.441: [CMS-concurrent-mark-start]
>>>> 1446.457: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1446.457: [CMS-concurrent-preclean-start]
>>>> 1446.458: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1446.458: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1451.461:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1451.461: [GC[YG occupancy: 93996 K (118016 K)]1451.461: [Rescan
>>>> (parallel) , 0.0120870 secs]1451.473: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 106846K(139444K), 0.0121920 secs]
>>>> [Times: user=0.14 sys=0.00, real=0.02 secs]
>>>> 1451.473: [CMS-concurrent-sweep-start]
>>>> 1451.476: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1451.476: [CMS-concurrent-reset-start]
>>>> 1451.485: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1453.485: [GC [1 CMS-initial-mark: 12849K(21428K)] 106974K(139444K),
>>>> 0.0117990 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1453.497: [CMS-concurrent-mark-start]
>>>> 1453.514: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1453.514: [CMS-concurrent-preclean-start]
>>>> 1453.515: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1453.515: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1458.518:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1458.518: [GC[YG occupancy: 94445 K (118016 K)]1458.518: [Rescan
>>>> (parallel) , 0.0123720 secs]1458.530: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 107295K(139444K), 0.0124750 secs]
>>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>>> 1458.530: [CMS-concurrent-sweep-start]
>>>> 1458.532: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1458.532: [CMS-concurrent-reset-start]
>>>> 1458.540: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1460.541: [GC [1 CMS-initial-mark: 12849K(21428K)] 107423K(139444K),
>>>> 0.0118680 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1460.553: [CMS-concurrent-mark-start]
>>>> 1460.568: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1460.568: [CMS-concurrent-preclean-start]
>>>> 1460.569: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1460.569: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1465.577:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1465.577: [GC[YG occupancy: 94894 K (118016 K)]1465.577: [Rescan
>>>> (parallel) , 0.0119100 secs]1465.589: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 107744K(139444K), 0.0120270 secs]
>>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>>> 1465.590: [CMS-concurrent-sweep-start]
>>>> 1465.591: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1465.591: [CMS-concurrent-reset-start]
>>>> 1465.600: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1467.600: [GC [1 CMS-initial-mark: 12849K(21428K)] 107937K(139444K),
>>>> 0.0120020 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1467.612: [CMS-concurrent-mark-start]
>>>> 1467.628: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1467.628: [CMS-concurrent-preclean-start]
>>>> 1467.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1467.628: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1472.636:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1472.637: [GC[YG occupancy: 95408 K (118016 K)]1472.637: [Rescan
>>>> (parallel) , 0.0119090 secs]1472.649: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 108257K(139444K), 0.0120260 secs]
>>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>>> 1472.649: [CMS-concurrent-sweep-start]
>>>> 1472.650: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1472.650: [CMS-concurrent-reset-start]
>>>> 1472.659: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1472.775: [GC [1 CMS-initial-mark: 12849K(21428K)] 108365K(139444K),
>>>> 0.0120260 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1472.787: [CMS-concurrent-mark-start]
>>>> 1472.805: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1472.805: [CMS-concurrent-preclean-start]
>>>> 1472.806: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.01 sys=0.00, real=0.00 secs]
>>>> 1472.806: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1477.808:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 1477.808: [GC[YG occupancy: 95876 K (118016 K)]1477.808: [Rescan
>>>> (parallel) , 0.0099490 secs]1477.818: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 108726K(139444K), 0.0100580 secs]
>>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>>> 1477.818: [CMS-concurrent-sweep-start]
>>>> 1477.820: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1477.820: [CMS-concurrent-reset-start]
>>>> 1477.828: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1479.829: [GC [1 CMS-initial-mark: 12849K(21428K)] 108854K(139444K),
>>>> 0.0119550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1479.841: [CMS-concurrent-mark-start]
>>>> 1479.857: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1479.857: [CMS-concurrent-preclean-start]
>>>> 1479.857: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1479.857: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1484.870:
>>>> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1484.870: [GC[YG occupancy: 96325 K (118016 K)]1484.870: [Rescan
>>>> (parallel) , 0.0122870 secs]1484.882: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 109175K(139444K), 0.0123900 secs]
>>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>>> 1484.882: [CMS-concurrent-sweep-start]
>>>> 1484.884: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1484.884: [CMS-concurrent-reset-start]
>>>> 1484.893: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1486.893: [GC [1 CMS-initial-mark: 12849K(21428K)] 109304K(139444K),
>>>> 0.0118470 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>>> 1486.905: [CMS-concurrent-mark-start]
>>>> 1486.921: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.01 secs]
>>>> 1486.921: [CMS-concurrent-preclean-start]
>>>> 1486.921: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1486.921: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1491.968:
>>>> [CMS-concurrent-abortable-preclean: 0.720/5.047 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.05 secs]
>>>> 1491.968: [GC[YG occupancy: 96774 K (118016 K)]1491.968: [Rescan
>>>> (parallel) , 0.0122850 secs]1491.981: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 109624K(139444K), 0.0123880 secs]
>>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>>> 1491.981: [CMS-concurrent-sweep-start]
>>>> 1491.982: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1491.982: [CMS-concurrent-reset-start]
>>>> 1491.991: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1493.991: [GC [1 CMS-initial-mark: 12849K(21428K)] 109753K(139444K),
>>>> 0.0119790 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1494.004: [CMS-concurrent-mark-start]
>>>> 1494.019: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1494.019: [CMS-concurrent-preclean-start]
>>>> 1494.019: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1494.019: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1499.100:
>>>> [CMS-concurrent-abortable-preclean: 0.722/5.080 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.08 secs]
>>>> 1499.100: [GC[YG occupancy: 98295 K (118016 K)]1499.100: [Rescan
>>>> (parallel) , 0.0123180 secs]1499.112: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 111145K(139444K), 0.0124240 secs]
>>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>>> 1499.113: [CMS-concurrent-sweep-start]
>>>> 1499.114: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1499.114: [CMS-concurrent-reset-start]
>>>> 1499.123: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1501.123: [GC [1 CMS-initial-mark: 12849K(21428K)] 111274K(139444K),
>>>> 0.0117720 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
>>>> 1501.135: [CMS-concurrent-mark-start]
>>>> 1501.150: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 1501.150: [CMS-concurrent-preclean-start]
>>>> 1501.151: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.01 sys=0.00, real=0.00 secs]
>>>> 1501.151: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1506.172:
>>>> [CMS-concurrent-abortable-preclean: 0.712/5.022 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.02 secs]
>>>> 1506.172: [GC[YG occupancy: 98890 K (118016 K)]1506.173: [Rescan
>>>> (parallel) , 0.0113790 secs]1506.184: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 111740K(139444K), 0.0114830 secs]
>>>> [Times: user=0.13 sys=0.00, real=0.02 secs]
>>>> 1506.184: [CMS-concurrent-sweep-start]
>>>> 1506.186: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1506.186: [CMS-concurrent-reset-start]
>>>> 1506.195: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1508.196: [GC [1 CMS-initial-mark: 12849K(21428K)] 111868K(139444K),
>>>> 0.0122930 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1508.208: [CMS-concurrent-mark-start]
>>>> 1508.225: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1508.225: [CMS-concurrent-preclean-start]
>>>> 1508.225: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1508.226: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1513.232:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1513.232: [GC[YG occupancy: 99339 K (118016 K)]1513.232: [Rescan
>>>> (parallel) , 0.0123890 secs]1513.244: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 112189K(139444K), 0.0124930 secs]
>>>> [Times: user=0.14 sys=0.00, real=0.02 secs]
>>>> 1513.245: [CMS-concurrent-sweep-start]
>>>> 1513.246: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1513.246: [CMS-concurrent-reset-start]
>>>> 1513.255: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1515.256: [GC [1 CMS-initial-mark: 12849K(21428K)] 113182K(139444K),
>>>> 0.0123210 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1515.268: [CMS-concurrent-mark-start]
>>>> 1515.285: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1515.285: [CMS-concurrent-preclean-start]
>>>> 1515.285: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1515.285: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1520.290:
>>>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1520.290: [GC[YG occupancy: 100653 K (118016 K)]1520.290: [Rescan
>>>> (parallel) , 0.0125490 secs]1520.303: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 113502K(139444K), 0.0126520 secs]
>>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>>> 1520.303: [CMS-concurrent-sweep-start]
>>>> 1520.304: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1520.304: [CMS-concurrent-reset-start]
>>>> 1520.313: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1522.314: [GC [1 CMS-initial-mark: 12849K(21428K)] 113631K(139444K),
>>>> 0.0118790 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1522.326: [CMS-concurrent-mark-start]
>>>> 1522.343: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.01 secs]
>>>> 1522.343: [CMS-concurrent-preclean-start]
>>>> 1522.343: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1522.343: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1527.350:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1527.350: [GC[YG occupancy: 101102 K (118016 K)]1527.350: [Rescan
>>>> (parallel) , 0.0127460 secs]1527.363: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 113952K(139444K), 0.0128490 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>>> 1527.363: [CMS-concurrent-sweep-start]
>>>> 1527.365: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1527.365: [CMS-concurrent-reset-start]
>>>> 1527.374: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1529.374: [GC [1 CMS-initial-mark: 12849K(21428K)] 114080K(139444K),
>>>> 0.0117550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1529.386: [CMS-concurrent-mark-start]
>>>> 1529.403: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1529.404: [CMS-concurrent-preclean-start]
>>>> 1529.404: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1529.404: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1534.454:
>>>> [CMS-concurrent-abortable-preclean: 0.712/5.050 secs] [Times:
>>>> user=0.70 sys=0.01, real=5.05 secs]
>>>> 1534.454: [GC[YG occupancy: 101591 K (118016 K)]1534.454: [Rescan
>>>> (parallel) , 0.0122680 secs]1534.466: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 114441K(139444K), 0.0123750 secs]
>>>> [Times: user=0.12 sys=0.02, real=0.01 secs]
>>>> 1534.466: [CMS-concurrent-sweep-start]
>>>> 1534.468: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1534.468: [CMS-concurrent-reset-start]
>>>> 1534.478: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1536.478: [GC [1 CMS-initial-mark: 12849K(21428K)] 114570K(139444K),
>>>> 0.0125250 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1536.491: [CMS-concurrent-mark-start]
>>>> 1536.507: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1536.507: [CMS-concurrent-preclean-start]
>>>> 1536.507: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1536.507: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1541.516:
>>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1541.516: [GC[YG occupancy: 102041 K (118016 K)]1541.516: [Rescan
>>>> (parallel) , 0.0088270 secs]1541.525: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 114890K(139444K), 0.0089300 secs]
>>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>>> 1541.525: [CMS-concurrent-sweep-start]
>>>> 1541.527: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1541.527: [CMS-concurrent-reset-start]
>>>> 1541.537: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1543.537: [GC [1 CMS-initial-mark: 12849K(21428K)] 115019K(139444K),
>>>> 0.0124500 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1543.550: [CMS-concurrent-mark-start]
>>>> 1543.566: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1543.566: [CMS-concurrent-preclean-start]
>>>> 1543.567: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1543.567: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1548.578:
>>>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1548.578: [GC[YG occupancy: 102490 K (118016 K)]1548.578: [Rescan
>>>> (parallel) , 0.0100430 secs]1548.588: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 115340K(139444K), 0.0101440 secs]
>>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>>> 1548.588: [CMS-concurrent-sweep-start]
>>>> 1548.589: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1548.589: [CMS-concurrent-reset-start]
>>>> 1548.598: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1550.598: [GC [1 CMS-initial-mark: 12849K(21428K)] 115468K(139444K),
>>>> 0.0125070 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1550.611: [CMS-concurrent-mark-start]
>>>> 1550.627: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1550.627: [CMS-concurrent-preclean-start]
>>>> 1550.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1550.628: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1555.631:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 1555.631: [GC[YG occupancy: 103003 K (118016 K)]1555.631: [Rescan
>>>> (parallel) , 0.0117610 secs]1555.643: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 115853K(139444K), 0.0118770 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>>> 1555.643: [CMS-concurrent-sweep-start]
>>>> 1555.645: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1555.645: [CMS-concurrent-reset-start]
>>>> 1555.655: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1557.655: [GC [1 CMS-initial-mark: 12849K(21428K)] 115981K(139444K),
>>>> 0.0126720 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1557.668: [CMS-concurrent-mark-start]
>>>> 1557.685: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1557.685: [CMS-concurrent-preclean-start]
>>>> 1557.685: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1557.685: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1562.688:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1562.688: [GC[YG occupancy: 103557 K (118016 K)]1562.688: [Rescan
>>>> (parallel) , 0.0121530 secs]1562.700: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 116407K(139444K), 0.0122560 secs]
>>>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>>>> 1562.700: [CMS-concurrent-sweep-start]
>>>> 1562.701: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1562.701: [CMS-concurrent-reset-start]
>>>> 1562.710: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1562.821: [GC [1 CMS-initial-mark: 12849K(21428K)] 116514K(139444K),
>>>> 0.0127240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1562.834: [CMS-concurrent-mark-start]
>>>> 1562.852: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.01 secs]
>>>> 1562.852: [CMS-concurrent-preclean-start]
>>>> 1562.853: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1562.853: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1567.859:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1567.859: [GC[YG occupancy: 104026 K (118016 K)]1567.859: [Rescan
>>>> (parallel) , 0.0131290 secs]1567.872: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 116876K(139444K), 0.0132470 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>>> 1567.873: [CMS-concurrent-sweep-start]
>>>> 1567.874: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1567.874: [CMS-concurrent-reset-start]
>>>> 1567.883: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1569.883: [GC [1 CMS-initial-mark: 12849K(21428K)] 117103K(139444K),
>>>> 0.0123770 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>>> 1569.896: [CMS-concurrent-mark-start]
>>>> 1569.913: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.01 secs]
>>>> 1569.913: [CMS-concurrent-preclean-start]
>>>> 1569.913: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.01 secs]
>>>> 1569.913: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1574.920:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1574.920: [GC[YG occupancy: 104510 K (118016 K)]1574.920: [Rescan
>>>> (parallel) , 0.0122810 secs]1574.932: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 117360K(139444K), 0.0123870 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>>> 1574.933: [CMS-concurrent-sweep-start]
>>>> 1574.935: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1574.935: [CMS-concurrent-reset-start]
>>>> 1574.944: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1575.163: [GC [1 CMS-initial-mark: 12849K(21428K)] 117360K(139444K),
>>>> 0.0121590 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
>>>> 1575.176: [CMS-concurrent-mark-start]
>>>> 1575.193: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1575.193: [CMS-concurrent-preclean-start]
>>>> 1575.193: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.01 secs]
>>>> 1575.193: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1580.197:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.00 secs]
>>>> 1580.197: [GC[YG occupancy: 104831 K (118016 K)]1580.197: [Rescan
>>>> (parallel) , 0.0129860 secs]1580.210: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 117681K(139444K), 0.0130980 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>>> 1580.210: [CMS-concurrent-sweep-start]
>>>> 1580.211: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1580.211: [CMS-concurrent-reset-start]
>>>> 1580.220: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1582.220: [GC [1 CMS-initial-mark: 12849K(21428K)] 117809K(139444K),
>>>> 0.0129700 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1582.234: [CMS-concurrent-mark-start]
>>>> 1582.249: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
>>>> sys=0.01, real=0.02 secs]
>>>> 1582.249: [CMS-concurrent-preclean-start]
>>>> 1582.249: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1582.249: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1587.262:
>>>> [CMS-concurrent-abortable-preclean: 0.707/5.013 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1587.262: [GC[YG occupancy: 105280 K (118016 K)]1587.262: [Rescan
>>>> (parallel) , 0.0134570 secs]1587.276: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 118130K(139444K), 0.0135720 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.02 secs]
>>>> 1587.276: [CMS-concurrent-sweep-start]
>>>> 1587.278: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1587.278: [CMS-concurrent-reset-start]
>>>> 1587.287: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1589.287: [GC [1 CMS-initial-mark: 12849K(21428K)] 118258K(139444K),
>>>> 0.0130010 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1589.301: [CMS-concurrent-mark-start]
>>>> 1589.316: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1589.316: [CMS-concurrent-preclean-start]
>>>> 1589.316: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1589.316: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1594.364:
>>>> [CMS-concurrent-abortable-preclean: 0.712/5.048 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.05 secs]
>>>> 1594.365: [GC[YG occupancy: 105770 K (118016 K)]1594.365: [Rescan
>>>> (parallel) , 0.0131190 secs]1594.378: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 118620K(139444K), 0.0132380 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>>> 1594.378: [CMS-concurrent-sweep-start]
>>>> 1594.380: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1594.380: [CMS-concurrent-reset-start]
>>>> 1594.389: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1596.390: [GC [1 CMS-initial-mark: 12849K(21428K)] 118748K(139444K),
>>>> 0.0130650 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1596.403: [CMS-concurrent-mark-start]
>>>> 1596.418: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1596.418: [CMS-concurrent-preclean-start]
>>>> 1596.419: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1596.419: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1601.422:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.69 sys=0.01, real=5.00 secs]
>>>> 1601.422: [GC[YG occupancy: 106219 K (118016 K)]1601.422: [Rescan
>>>> (parallel) , 0.0130310 secs]1601.435: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 119069K(139444K), 0.0131490 secs]
>>>> [Times: user=0.16 sys=0.00, real=0.02 secs]
>>>> 1601.435: [CMS-concurrent-sweep-start]
>>>> 1601.437: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1601.437: [CMS-concurrent-reset-start]
>>>> 1601.446: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1603.447: [GC [1 CMS-initial-mark: 12849K(21428K)] 119197K(139444K),
>>>> 0.0130220 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1603.460: [CMS-concurrent-mark-start]
>>>> 1603.476: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1603.476: [CMS-concurrent-preclean-start]
>>>> 1603.476: [CMS-concurrent-preclean: 0.000/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1603.476: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1608.478:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1608.478: [GC[YG occupancy: 106668 K (118016 K)]1608.479: [Rescan
>>>> (parallel) , 0.0122680 secs]1608.491: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 119518K(139444K), 0.0123790 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>>> 1608.491: [CMS-concurrent-sweep-start]
>>>> 1608.492: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1608.492: [CMS-concurrent-reset-start]
>>>> 1608.501: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1610.502: [GC [1 CMS-initial-mark: 12849K(21428K)] 119646K(139444K),
>>>> 0.0130770 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>>> 1610.515: [CMS-concurrent-mark-start]
>>>> 1610.530: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1610.530: [CMS-concurrent-preclean-start]
>>>> 1610.530: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1610.530: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1615.536:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1615.536: [GC[YG occupancy: 107117 K (118016 K)]1615.536: [Rescan
>>>> (parallel) , 0.0125470 secs]1615.549: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 119967K(139444K), 0.0126510 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>>> 1615.549: [CMS-concurrent-sweep-start]
>>>> 1615.551: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1615.551: [CMS-concurrent-reset-start]
>>>> 1615.561: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1617.561: [GC [1 CMS-initial-mark: 12849K(21428K)] 120095K(139444K),
>>>> 0.0129520 secs] [Times: user=0.00 sys=0.00, real=0.02 secs]
>>>> 1617.574: [CMS-concurrent-mark-start]
>>>> 1617.591: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.01 secs]
>>>> 1617.591: [CMS-concurrent-preclean-start]
>>>> 1617.591: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1617.591: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1622.598:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1622.598: [GC[YG occupancy: 107777 K (118016 K)]1622.599: [Rescan
>>>> (parallel) , 0.0140340 secs]1622.613: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 120627K(139444K), 0.0141520 secs]
>>>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>>>> 1622.613: [CMS-concurrent-sweep-start]
>>>> 1622.614: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1622.614: [CMS-concurrent-reset-start]
>>>> 1622.623: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.02 secs]
>>>> 1622.848: [GC [1 CMS-initial-mark: 12849K(21428K)] 120691K(139444K),
>>>> 0.0133410 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1622.861: [CMS-concurrent-mark-start]
>>>> 1622.878: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1622.878: [CMS-concurrent-preclean-start]
>>>> 1622.879: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1622.879: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1627.941:
>>>> [CMS-concurrent-abortable-preclean: 0.656/5.062 secs] [Times:
>>>> user=0.65 sys=0.00, real=5.06 secs]
>>>> 1627.941: [GC[YG occupancy: 108202 K (118016 K)]1627.941: [Rescan
>>>> (parallel) , 0.0135120 secs]1627.955: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 121052K(139444K), 0.0136620 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.02 secs]
>>>> 1627.955: [CMS-concurrent-sweep-start]
>>>> 1627.956: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1627.956: [CMS-concurrent-reset-start]
>>>> 1627.965: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1629.966: [GC [1 CMS-initial-mark: 12849K(21428K)] 121180K(139444K),
>>>> 0.0133770 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1629.979: [CMS-concurrent-mark-start]
>>>> 1629.995: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1629.995: [CMS-concurrent-preclean-start]
>>>> 1629.996: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1629.996: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1634.998:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.69 sys=0.00, real=5.00 secs]
>>>> 1634.999: [GC[YG occupancy: 108651 K (118016 K)]1634.999: [Rescan
>>>> (parallel) , 0.0134300 secs]1635.012: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 121501K(139444K), 0.0135530 secs]
>>>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>>>> 1635.012: [CMS-concurrent-sweep-start]
>>>> 1635.014: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1635.014: [CMS-concurrent-reset-start]
>>>> 1635.023: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1637.023: [GC [1 CMS-initial-mark: 12849K(21428K)] 121629K(139444K),
>>>> 0.0127330 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>>> 1637.036: [CMS-concurrent-mark-start]
>>>> 1637.053: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1637.054: [CMS-concurrent-preclean-start]
>>>> 1637.054: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1637.054: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1642.062:
>>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1642.062: [GC[YG occupancy: 109100 K (118016 K)]1642.062: [Rescan
>>>> (parallel) , 0.0124310 secs]1642.075: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 121950K(139444K), 0.0125510 secs]
>>>> [Times: user=0.16 sys=0.00, real=0.02 secs]
>>>> 1642.075: [CMS-concurrent-sweep-start]
>>>> 1642.077: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1642.077: [CMS-concurrent-reset-start]
>>>> 1642.086: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1644.087: [GC [1 CMS-initial-mark: 12849K(21428K)] 122079K(139444K),
>>>> 0.0134300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1644.100: [CMS-concurrent-mark-start]
>>>> 1644.116: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1644.116: [CMS-concurrent-preclean-start]
>>>> 1644.116: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1644.116: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1649.125:
>>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1649.126: [GC[YG occupancy: 109549 K (118016 K)]1649.126: [Rescan
>>>> (parallel) , 0.0126870 secs]1649.138: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 122399K(139444K), 0.0128010 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>>> 1649.139: [CMS-concurrent-sweep-start]
>>>> 1649.141: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1649.141: [CMS-concurrent-reset-start]
>>>> 1649.150: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1651.150: [GC [1 CMS-initial-mark: 12849K(21428K)] 122528K(139444K),
>>>> 0.0134790 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1651.164: [CMS-concurrent-mark-start]
>>>> 1651.179: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1651.179: [CMS-concurrent-preclean-start]
>>>> 1651.179: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1651.179: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1656.254:
>>>> [CMS-concurrent-abortable-preclean: 0.722/5.074 secs] [Times:
>>>> user=0.71 sys=0.01, real=5.07 secs]
>>>> 1656.254: [GC[YG occupancy: 110039 K (118016 K)]1656.254: [Rescan
>>>> (parallel) , 0.0092110 secs]1656.263: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 122889K(139444K), 0.0093170 secs]
>>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>>> 1656.263: [CMS-concurrent-sweep-start]
>>>> 1656.266: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1656.266: [CMS-concurrent-reset-start]
>>>> 1656.275: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1658.275: [GC [1 CMS-initial-mark: 12849K(21428K)] 123017K(139444K),
>>>> 0.0134150 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1658.289: [CMS-concurrent-mark-start]
>>>> 1658.305: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1658.306: [CMS-concurrent-preclean-start]
>>>> 1658.306: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1658.306: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1663.393:
>>>> [CMS-concurrent-abortable-preclean: 0.711/5.087 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.08 secs]
>>>> 1663.393: [GC[YG occupancy: 110488 K (118016 K)]1663.393: [Rescan
>>>> (parallel) , 0.0132450 secs]1663.406: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 123338K(139444K), 0.0133600 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.02 secs]
>>>> 1663.407: [CMS-concurrent-sweep-start]
>>>> 1663.409: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1663.409: [CMS-concurrent-reset-start]
>>>> 1663.418: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1665.418: [GC [1 CMS-initial-mark: 12849K(21428K)] 123467K(139444K),
>>>> 0.0135570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1665.432: [CMS-concurrent-mark-start]
>>>> 1665.447: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1665.447: [CMS-concurrent-preclean-start]
>>>> 1665.448: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1665.448: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1670.457:
>>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1670.457: [GC[YG occupancy: 110937 K (118016 K)]1670.457: [Rescan
>>>> (parallel) , 0.0142820 secs]1670.471: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 123787K(139444K), 0.0144010 secs]
>>>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>>>> 1670.472: [CMS-concurrent-sweep-start]
>>>> 1670.473: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1670.473: [CMS-concurrent-reset-start]
>>>> 1670.482: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1672.482: [GC [1 CMS-initial-mark: 12849K(21428K)] 123916K(139444K),
>>>> 0.0136110 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>>> 1672.496: [CMS-concurrent-mark-start]
>>>> 1672.513: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1672.513: [CMS-concurrent-preclean-start]
>>>> 1672.513: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1672.513: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1677.530:
>>>> [CMS-concurrent-abortable-preclean: 0.711/5.017 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.02 secs]
>>>> 1677.530: [GC[YG occupancy: 111387 K (118016 K)]1677.530: [Rescan
>>>> (parallel) , 0.0129210 secs]1677.543: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 124236K(139444K), 0.0130360 secs]
>>>> [Times: user=0.16 sys=0.00, real=0.02 secs]
>>>> 1677.543: [CMS-concurrent-sweep-start]
>>>> 1677.545: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1677.545: [CMS-concurrent-reset-start]
>>>> 1677.554: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1679.554: [GC [1 CMS-initial-mark: 12849K(21428K)] 124365K(139444K),
>>>> 0.0125140 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1679.567: [CMS-concurrent-mark-start]
>>>> 1679.584: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1679.584: [CMS-concurrent-preclean-start]
>>>> 1679.584: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1679.584: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1684.631:
>>>> [CMS-concurrent-abortable-preclean: 0.714/5.047 secs] [Times:
>>>> user=0.72 sys=0.00, real=5.04 secs]
>>>> 1684.631: [GC[YG occupancy: 112005 K (118016 K)]1684.631: [Rescan
>>>> (parallel) , 0.0146760 secs]1684.646: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 124855K(139444K), 0.0147930 secs]
>>>> [Times: user=0.16 sys=0.00, real=0.02 secs]
>>>> 1684.646: [CMS-concurrent-sweep-start]
>>>> 1684.648: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1684.648: [CMS-concurrent-reset-start]
>>>> 1684.656: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1686.656: [GC [1 CMS-initial-mark: 12849K(21428K)] 125048K(139444K),
>>>> 0.0138340 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1686.670: [CMS-concurrent-mark-start]
>>>> 1686.686: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1686.686: [CMS-concurrent-preclean-start]
>>>> 1686.687: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1686.687: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1691.689:
>>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1691.689: [GC[YG occupancy: 112518 K (118016 K)]1691.689: [Rescan
>>>> (parallel) , 0.0142600 secs]1691.703: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 12849K(21428K)] 125368K(139444K), 0.0143810 secs]
>>>> [Times: user=0.16 sys=0.00, real=0.02 secs]
>>>> 1691.703: [CMS-concurrent-sweep-start]
>>>> 1691.705: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1691.705: [CMS-concurrent-reset-start]
>>>> 1691.714: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1693.714: [GC [1 CMS-initial-mark: 12849K(21428K)] 125497K(139444K),
>>>> 0.0126710 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1693.727: [CMS-concurrent-mark-start]
>>>> 1693.744: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1693.744: [CMS-concurrent-preclean-start]
>>>> 1693.745: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1693.745: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1698.747:
>>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1698.748: [GC[YG occupancy: 112968 K (118016 K)]1698.748: [Rescan
>>>> (parallel) , 0.0147370 secs]1698.762: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 125818K(139444K), 0.0148490 secs]
>>>> [Times: user=0.17 sys=0.00, real=0.01 secs]
>>>> 1698.763: [CMS-concurrent-sweep-start]
>>>> 1698.764: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1698.764: [CMS-concurrent-reset-start]
>>>> 1698.773: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1700.773: [GC [1 CMS-initial-mark: 12849K(21428K)] 125946K(139444K),
>>>> 0.0128810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1700.786: [CMS-concurrent-mark-start]
>>>> 1700.804: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1700.804: [CMS-concurrent-preclean-start]
>>>> 1700.804: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1700.804: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1705.810:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1705.810: [GC[YG occupancy: 113417 K (118016 K)]1705.810: [Rescan
>>>> (parallel) , 0.0146750 secs]1705.825: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 126267K(139444K), 0.0147760 secs]
>>>> [Times: user=0.17 sys=0.00, real=0.02 secs]
>>>> 1705.825: [CMS-concurrent-sweep-start]
>>>> 1705.827: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1705.827: [CMS-concurrent-reset-start]
>>>> 1705.836: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1707.836: [GC [1 CMS-initial-mark: 12849K(21428K)] 126395K(139444K),
>>>> 0.0137570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1707.850: [CMS-concurrent-mark-start]
>>>> 1707.866: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1707.866: [CMS-concurrent-preclean-start]
>>>> 1707.867: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1707.867: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1712.878:
>>>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1712.878: [GC[YG occupancy: 113866 K (118016 K)]1712.878: [Rescan
>>>> (parallel) , 0.0116340 secs]1712.890: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 126716K(139444K), 0.0117350 secs]
>>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>>> 1712.890: [CMS-concurrent-sweep-start]
>>>> 1712.893: [CMS-concurrent-sweep: 0.002/0.003 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1712.893: [CMS-concurrent-reset-start]
>>>> 1712.902: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1714.902: [GC [1 CMS-initial-mark: 12849K(21428K)] 126984K(139444K),
>>>> 0.0134590 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>>> 1714.915: [CMS-concurrent-mark-start]
>>>> 1714.933: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1714.933: [CMS-concurrent-preclean-start]
>>>> 1714.934: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1714.934: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1719.940:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.00 secs]
>>>> 1719.940: [GC[YG occupancy: 114552 K (118016 K)]1719.940: [Rescan
>>>> (parallel) , 0.0141320 secs]1719.955: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 127402K(139444K), 0.0142280 secs]
>>>> [Times: user=0.16 sys=0.01, real=0.02 secs]
>>>> 1719.955: [CMS-concurrent-sweep-start]
>>>> 1719.956: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1719.956: [CMS-concurrent-reset-start]
>>>> 1719.965: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1721.966: [GC [1 CMS-initial-mark: 12849K(21428K)] 127530K(139444K),
>>>> 0.0139120 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1721.980: [CMS-concurrent-mark-start]
>>>> 1721.996: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1721.996: [CMS-concurrent-preclean-start]
>>>> 1721.997: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1721.997: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1727.010:
>>>> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
>>>> user=0.71 sys=0.00, real=5.01 secs]
>>>> 1727.010: [GC[YG occupancy: 115000 K (118016 K)]1727.010: [Rescan
>>>> (parallel) , 0.0123190 secs]1727.023: [weak refs processing, 0.0000130
>>>> secs] [1 CMS-remark: 12849K(21428K)] 127850K(139444K), 0.0124420 secs]
>>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>>> 1727.023: [CMS-concurrent-sweep-start]
>>>> 1727.024: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1727.024: [CMS-concurrent-reset-start]
>>>> 1727.033: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1729.034: [GC [1 CMS-initial-mark: 12849K(21428K)] 127978K(139444K),
>>>> 0.0129330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1729.047: [CMS-concurrent-mark-start]
>>>> 1729.064: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1729.064: [CMS-concurrent-preclean-start]
>>>> 1729.064: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1729.064: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1734.075:
>>>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1734.075: [GC[YG occupancy: 115449 K (118016 K)]1734.075: [Rescan
>>>> (parallel) , 0.0131600 secs]1734.088: [weak refs processing, 0.0000130
>>>> secs] [1 CMS-remark: 12849K(21428K)] 128298K(139444K), 0.0132810 secs]
>>>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>>>> 1734.089: [CMS-concurrent-sweep-start]
>>>> 1734.091: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1734.091: [CMS-concurrent-reset-start]
>>>> 1734.100: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1736.100: [GC [1 CMS-initial-mark: 12849K(21428K)] 128427K(139444K),
>>>> 0.0141000 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>>> 1736.115: [CMS-concurrent-mark-start]
>>>> 1736.130: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1736.131: [CMS-concurrent-preclean-start]
>>>> 1736.131: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1736.131: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1741.139:
>>>> [CMS-concurrent-abortable-preclean: 0.702/5.008 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.01 secs]
>>>> 1741.139: [GC[YG occupancy: 115897 K (118016 K)]1741.139: [Rescan
>>>> (parallel) , 0.0146880 secs]1741.154: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 12849K(21428K)] 128747K(139444K), 0.0148020 secs]
>>>> [Times: user=0.17 sys=0.00, real=0.02 secs]
>>>> 1741.154: [CMS-concurrent-sweep-start]
>>>> 1741.156: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1741.156: [CMS-concurrent-reset-start]
>>>> 1741.165: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1742.898: [GC [1 CMS-initial-mark: 12849K(21428K)] 129085K(139444K),
>>>> 0.0144050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1742.913: [CMS-concurrent-mark-start]
>>>> 1742.931: [CMS-concurrent-mark: 0.017/0.019 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1742.931: [CMS-concurrent-preclean-start]
>>>> 1742.932: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1742.932: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1748.016:
>>>> [CMS-concurrent-abortable-preclean: 0.728/5.084 secs] [Times:
>>>> user=0.73 sys=0.00, real=5.09 secs]
>>>> 1748.016: [GC[YG occupancy: 116596 K (118016 K)]1748.016: [Rescan
>>>> (parallel) , 0.0149950 secs]1748.031: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 129446K(139444K), 0.0150970 secs]
>>>> [Times: user=0.17 sys=0.00, real=0.01 secs]
>>>> 1748.031: [CMS-concurrent-sweep-start]
>>>> 1748.033: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1748.033: [CMS-concurrent-reset-start]
>>>> 1748.041: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1750.042: [GC [1 CMS-initial-mark: 12849K(21428K)] 129574K(139444K),
>>>> 0.0141840 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1750.056: [CMS-concurrent-mark-start]
>>>> 1750.073: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1750.073: [CMS-concurrent-preclean-start]
>>>> 1750.074: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1750.074: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1755.080:
>>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>>> user=0.70 sys=0.00, real=5.00 secs]
>>>> 1755.080: [GC[YG occupancy: 117044 K (118016 K)]1755.080: [Rescan
>>>> (parallel) , 0.0155560 secs]1755.096: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 12849K(21428K)] 129894K(139444K), 0.0156580 secs]
>>>> [Times: user=0.17 sys=0.00, real=0.02 secs]
>>>> 1755.096: [CMS-concurrent-sweep-start]
>>>> 1755.097: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1755.097: [CMS-concurrent-reset-start]
>>>> 1755.105: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1756.660: [GC 1756.660: [ParNew: 117108K->482K(118016K), 0.0081410
>>>> secs] 129958K->24535K(144568K), 0.0083030 secs] [Times: user=0.05
>>>> sys=0.01, real=0.01 secs]
>>>> 1756.668: [GC [1 CMS-initial-mark: 24053K(26552K)] 24599K(144568K),
>>>> 0.0015280 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1756.670: [CMS-concurrent-mark-start]
>>>> 1756.688: [CMS-concurrent-mark: 0.016/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1756.688: [CMS-concurrent-preclean-start]
>>>> 1756.689: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1756.689: [GC[YG occupancy: 546 K (118016 K)]1756.689: [Rescan
>>>> (parallel) , 0.0018170 secs]1756.691: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(26552K)] 24599K(144568K), 0.0019050 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1756.691: [CMS-concurrent-sweep-start]
>>>> 1756.694: [CMS-concurrent-sweep: 0.004/0.004 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1756.694: [CMS-concurrent-reset-start]
>>>> 1756.704: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1758.704: [GC [1 CMS-initial-mark: 24053K(40092K)] 25372K(158108K),
>>>> 0.0014030 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1758.705: [CMS-concurrent-mark-start]
>>>> 1758.720: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>>> sys=0.00, real=0.01 secs]
>>>> 1758.720: [CMS-concurrent-preclean-start]
>>>> 1758.720: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.01 sys=0.00, real=0.00 secs]
>>>> 1758.721: [GC[YG occupancy: 1319 K (118016 K)]1758.721: [Rescan
>>>> (parallel) , 0.0014940 secs]1758.722: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 25372K(158108K), 0.0015850 secs]
>>>> [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1758.722: [CMS-concurrent-sweep-start]
>>>> 1758.726: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1758.726: [CMS-concurrent-reset-start]
>>>> 1758.735: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1760.735: [GC [1 CMS-initial-mark: 24053K(40092K)] 25565K(158108K),
>>>> 0.0014530 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1760.737: [CMS-concurrent-mark-start]
>>>> 1760.755: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1760.755: [CMS-concurrent-preclean-start]
>>>> 1760.755: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1760.756: [GC[YG occupancy: 1512 K (118016 K)]1760.756: [Rescan
>>>> (parallel) , 0.0014970 secs]1760.757: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 25565K(158108K), 0.0015980 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1760.757: [CMS-concurrent-sweep-start]
>>>> 1760.761: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1760.761: [CMS-concurrent-reset-start]
>>>> 1760.770: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1762.770: [GC [1 CMS-initial-mark: 24053K(40092K)] 25693K(158108K),
>>>> 0.0013680 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1762.772: [CMS-concurrent-mark-start]
>>>> 1762.788: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1762.788: [CMS-concurrent-preclean-start]
>>>> 1762.788: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1762.788: [GC[YG occupancy: 1640 K (118016 K)]1762.789: [Rescan
>>>> (parallel) , 0.0020360 secs]1762.791: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 25693K(158108K), 0.0021450 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1762.791: [CMS-concurrent-sweep-start]
>>>> 1762.794: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1762.794: [CMS-concurrent-reset-start]
>>>> 1762.803: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1764.804: [GC [1 CMS-initial-mark: 24053K(40092K)] 26747K(158108K),
>>>> 0.0014620 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1764.805: [CMS-concurrent-mark-start]
>>>> 1764.819: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1764.819: [CMS-concurrent-preclean-start]
>>>> 1764.820: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1764.820: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1769.835:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.02 secs]
>>>> 1769.835: [GC[YG occupancy: 3015 K (118016 K)]1769.835: [Rescan
>>>> (parallel) , 0.0010360 secs]1769.836: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 27068K(158108K), 0.0011310 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1769.837: [CMS-concurrent-sweep-start]
>>>> 1769.840: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1769.840: [CMS-concurrent-reset-start]
>>>> 1769.849: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1771.850: [GC [1 CMS-initial-mark: 24053K(40092K)] 27196K(158108K),
>>>> 0.0014740 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1771.851: [CMS-concurrent-mark-start]
>>>> 1771.868: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1771.868: [CMS-concurrent-preclean-start]
>>>> 1771.868: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1771.868: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1776.913:
>>>> [CMS-concurrent-abortable-preclean: 0.112/5.044 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.04 secs]
>>>> 1776.913: [GC[YG occupancy: 4052 K (118016 K)]1776.913: [Rescan
>>>> (parallel) , 0.0017790 secs]1776.915: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 24053K(40092K)] 28105K(158108K), 0.0018790 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1776.915: [CMS-concurrent-sweep-start]
>>>> 1776.918: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1776.918: [CMS-concurrent-reset-start]
>>>> 1776.927: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1778.928: [GC [1 CMS-initial-mark: 24053K(40092K)] 28233K(158108K),
>>>> 0.0015470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1778.929: [CMS-concurrent-mark-start]
>>>> 1778.947: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1778.947: [CMS-concurrent-preclean-start]
>>>> 1778.947: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1778.947: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1783.963:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 1783.963: [GC[YG occupancy: 4505 K (118016 K)]1783.963: [Rescan
>>>> (parallel) , 0.0014480 secs]1783.965: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 28558K(158108K), 0.0015470 secs]
>>>> [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1783.965: [CMS-concurrent-sweep-start]
>>>> 1783.968: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1783.968: [CMS-concurrent-reset-start]
>>>> 1783.977: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1785.978: [GC [1 CMS-initial-mark: 24053K(40092K)] 28686K(158108K),
>>>> 0.0015760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1785.979: [CMS-concurrent-mark-start]
>>>> 1785.996: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1785.996: [CMS-concurrent-preclean-start]
>>>> 1785.996: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1785.996: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1791.009:
>>>> [CMS-concurrent-abortable-preclean: 0.107/5.013 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 1791.010: [GC[YG occupancy: 4954 K (118016 K)]1791.010: [Rescan
>>>> (parallel) , 0.0020030 secs]1791.012: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 24053K(40092K)] 29007K(158108K), 0.0021040 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1791.012: [CMS-concurrent-sweep-start]
>>>> 1791.015: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1791.015: [CMS-concurrent-reset-start]
>>>> 1791.023: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1793.023: [GC [1 CMS-initial-mark: 24053K(40092K)] 29136K(158108K),
>>>> 0.0017200 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1793.025: [CMS-concurrent-mark-start]
>>>> 1793.044: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.08
>>>> sys=0.00, real=0.02 secs]
>>>> 1793.044: [CMS-concurrent-preclean-start]
>>>> 1793.045: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1793.045: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1798.137:
>>>> [CMS-concurrent-abortable-preclean: 0.112/5.093 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.09 secs]
>>>> 1798.137: [GC[YG occupancy: 6539 K (118016 K)]1798.137: [Rescan
>>>> (parallel) , 0.0016650 secs]1798.139: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 24053K(40092K)] 30592K(158108K), 0.0017600 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1798.139: [CMS-concurrent-sweep-start]
>>>> 1798.143: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1798.143: [CMS-concurrent-reset-start]
>>>> 1798.152: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1800.152: [GC [1 CMS-initial-mark: 24053K(40092K)] 30721K(158108K),
>>>> 0.0016650 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1800.154: [CMS-concurrent-mark-start]
>>>> 1800.170: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.01 secs]
>>>> 1800.170: [CMS-concurrent-preclean-start]
>>>> 1800.171: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1800.171: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1805.181:
>>>> [CMS-concurrent-abortable-preclean: 0.110/5.010 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.01 secs]
>>>> 1805.181: [GC[YG occupancy: 8090 K (118016 K)]1805.181: [Rescan
>>>> (parallel) , 0.0018850 secs]1805.183: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 24053K(40092K)] 32143K(158108K), 0.0019860 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1805.183: [CMS-concurrent-sweep-start]
>>>> 1805.187: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1805.187: [CMS-concurrent-reset-start]
>>>> 1805.196: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1807.196: [GC [1 CMS-initial-mark: 24053K(40092K)] 32272K(158108K),
>>>> 0.0018760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1807.198: [CMS-concurrent-mark-start]
>>>> 1807.216: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1807.216: [CMS-concurrent-preclean-start]
>>>> 1807.216: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1807.216: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1812.232:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 1812.232: [GC[YG occupancy: 8543 K (118016 K)]1812.232: [Rescan
>>>> (parallel) , 0.0020890 secs]1812.234: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 32596K(158108K), 0.0021910 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1812.234: [CMS-concurrent-sweep-start]
>>>> 1812.238: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1812.238: [CMS-concurrent-reset-start]
>>>> 1812.247: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1812.928: [GC [1 CMS-initial-mark: 24053K(40092K)] 32661K(158108K),
>>>> 0.0019710 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1812.930: [CMS-concurrent-mark-start]
>>>> 1812.947: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1812.947: [CMS-concurrent-preclean-start]
>>>> 1812.947: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1812.948: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1817.963:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 1817.963: [GC[YG occupancy: 8928 K (118016 K)]1817.963: [Rescan
>>>> (parallel) , 0.0011790 secs]1817.964: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 32981K(158108K), 0.0012750 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1817.964: [CMS-concurrent-sweep-start]
>>>> 1817.968: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1817.968: [CMS-concurrent-reset-start]
>>>> 1817.977: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1819.977: [GC [1 CMS-initial-mark: 24053K(40092K)] 33110K(158108K),
>>>> 0.0018900 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1819.979: [CMS-concurrent-mark-start]
>>>> 1819.996: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1819.997: [CMS-concurrent-preclean-start]
>>>> 1819.997: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1819.997: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1825.012:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 1825.013: [GC[YG occupancy: 9377 K (118016 K)]1825.013: [Rescan
>>>> (parallel) , 0.0020580 secs]1825.015: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 33431K(158108K), 0.0021510 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 1825.015: [CMS-concurrent-sweep-start]
>>>> 1825.018: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1825.018: [CMS-concurrent-reset-start]
>>>> 1825.027: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1827.028: [GC [1 CMS-initial-mark: 24053K(40092K)] 33559K(158108K),
>>>> 0.0019140 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1827.030: [CMS-concurrent-mark-start]
>>>> 1827.047: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1827.047: [CMS-concurrent-preclean-start]
>>>> 1827.047: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1827.047: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1832.066:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.018 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.02 secs]
>>>> 1832.066: [GC[YG occupancy: 9827 K (118016 K)]1832.066: [Rescan
>>>> (parallel) , 0.0019440 secs]1832.068: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 33880K(158108K), 0.0020410 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1832.068: [CMS-concurrent-sweep-start]
>>>> 1832.071: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1832.071: [CMS-concurrent-reset-start]
>>>> 1832.081: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1832.935: [GC [1 CMS-initial-mark: 24053K(40092K)] 34093K(158108K),
>>>> 0.0019830 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1832.937: [CMS-concurrent-mark-start]
>>>> 1832.954: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1832.954: [CMS-concurrent-preclean-start]
>>>> 1832.955: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1832.955: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1837.970:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 1837.970: [GC[YG occupancy: 10349 K (118016 K)]1837.970: [Rescan
>>>> (parallel) , 0.0019670 secs]1837.972: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 24053K(40092K)] 34402K(158108K), 0.0020800 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 1837.972: [CMS-concurrent-sweep-start]
>>>> 1837.976: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1837.976: [CMS-concurrent-reset-start]
>>>> 1837.985: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1839.985: [GC [1 CMS-initial-mark: 24053K(40092K)] 34531K(158108K),
>>>> 0.0020220 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1839.987: [CMS-concurrent-mark-start]
>>>> 1840.005: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.06
>>>> sys=0.01, real=0.02 secs]
>>>> 1840.005: [CMS-concurrent-preclean-start]
>>>> 1840.006: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1840.006: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1845.018:
>>>> [CMS-concurrent-abortable-preclean: 0.106/5.012 secs] [Times:
>>>> user=0.10 sys=0.01, real=5.01 secs]
>>>> 1845.018: [GC[YG occupancy: 10798 K (118016 K)]1845.018: [Rescan
>>>> (parallel) , 0.0015500 secs]1845.019: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 34851K(158108K), 0.0016500 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1845.020: [CMS-concurrent-sweep-start]
>>>> 1845.023: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1845.023: [CMS-concurrent-reset-start]
>>>> 1845.032: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1847.032: [GC [1 CMS-initial-mark: 24053K(40092K)] 34980K(158108K),
>>>> 0.0020600 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1847.035: [CMS-concurrent-mark-start]
>>>> 1847.051: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.01 secs]
>>>> 1847.051: [CMS-concurrent-preclean-start]
>>>> 1847.052: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1847.052: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1852.067:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.02 secs]
>>>> 1852.067: [GC[YG occupancy: 11247 K (118016 K)]1852.067: [Rescan
>>>> (parallel) , 0.0011880 secs]1852.069: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 35300K(158108K), 0.0012900 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1852.069: [CMS-concurrent-sweep-start]
>>>> 1852.072: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1852.072: [CMS-concurrent-reset-start]
>>>> 1852.081: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1854.082: [GC [1 CMS-initial-mark: 24053K(40092K)] 35429K(158108K),
>>>> 0.0021010 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1854.084: [CMS-concurrent-mark-start]
>>>> 1854.100: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1854.100: [CMS-concurrent-preclean-start]
>>>> 1854.101: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1854.101: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1859.116:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.02 secs]
>>>> 1859.116: [GC[YG occupancy: 11701 K (118016 K)]1859.117: [Rescan
>>>> (parallel) , 0.0010230 secs]1859.118: [weak refs processing, 0.0000130
>>>> secs] [1 CMS-remark: 24053K(40092K)] 35754K(158108K), 0.0011230 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1859.118: [CMS-concurrent-sweep-start]
>>>> 1859.121: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1859.121: [CMS-concurrent-reset-start]
>>>> 1859.130: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1861.131: [GC [1 CMS-initial-mark: 24053K(40092K)] 35882K(158108K),
>>>> 0.0021240 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1861.133: [CMS-concurrent-mark-start]
>>>> 1861.149: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1861.149: [CMS-concurrent-preclean-start]
>>>> 1861.150: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1861.150: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1866.220:
>>>> [CMS-concurrent-abortable-preclean: 0.112/5.070 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.07 secs]
>>>> 1866.220: [GC[YG occupancy: 12388 K (118016 K)]1866.220: [Rescan
>>>> (parallel) , 0.0027090 secs]1866.223: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 36441K(158108K), 0.0028070 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.01 secs]
>>>> 1866.223: [CMS-concurrent-sweep-start]
>>>> 1866.227: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1866.227: [CMS-concurrent-reset-start]
>>>> 1866.236: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1868.236: [GC [1 CMS-initial-mark: 24053K(40092K)] 36569K(158108K),
>>>> 0.0023650 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1868.239: [CMS-concurrent-mark-start]
>>>> 1868.256: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1868.256: [CMS-concurrent-preclean-start]
>>>> 1868.257: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1868.257: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1873.267:
>>>> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
>>>> user=0.13 sys=0.00, real=5.01 secs]
>>>> 1873.268: [GC[YG occupancy: 12837 K (118016 K)]1873.268: [Rescan
>>>> (parallel) , 0.0018720 secs]1873.270: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 36890K(158108K), 0.0019730 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1873.270: [CMS-concurrent-sweep-start]
>>>> 1873.273: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1873.273: [CMS-concurrent-reset-start]
>>>> 1873.282: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1875.283: [GC [1 CMS-initial-mark: 24053K(40092K)] 37018K(158108K),
>>>> 0.0024410 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1875.285: [CMS-concurrent-mark-start]
>>>> 1875.302: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1875.302: [CMS-concurrent-preclean-start]
>>>> 1875.302: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1875.303: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1880.318:
>>>> [CMS-concurrent-abortable-preclean: 0.110/5.015 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.02 secs]
>>>> 1880.318: [GC[YG occupancy: 13286 K (118016 K)]1880.318: [Rescan
>>>> (parallel) , 0.0023860 secs]1880.321: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 24053K(40092K)] 37339K(158108K), 0.0024910 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1880.321: [CMS-concurrent-sweep-start]
>>>> 1880.324: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1880.324: [CMS-concurrent-reset-start]
>>>> 1880.333: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 1882.334: [GC [1 CMS-initial-mark: 24053K(40092K)] 37467K(158108K),
>>>> 0.0024090 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1882.336: [CMS-concurrent-mark-start]
>>>> 1882.352: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1882.352: [CMS-concurrent-preclean-start]
>>>> 1882.353: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1882.353: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1887.368:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.02 secs]
>>>> 1887.368: [GC[YG occupancy: 13739 K (118016 K)]1887.368: [Rescan
>>>> (parallel) , 0.0022370 secs]1887.370: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 37792K(158108K), 0.0023360 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 1887.371: [CMS-concurrent-sweep-start]
>>>> 1887.374: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1887.374: [CMS-concurrent-reset-start]
>>>> 1887.383: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1889.384: [GC [1 CMS-initial-mark: 24053K(40092K)] 37920K(158108K),
>>>> 0.0024690 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1889.386: [CMS-concurrent-mark-start]
>>>> 1889.404: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1889.404: [CMS-concurrent-preclean-start]
>>>> 1889.405: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.01 sys=0.00, real=0.00 secs]
>>>> 1889.405: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1894.488:
>>>> [CMS-concurrent-abortable-preclean: 0.112/5.083 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.08 secs]
>>>> 1894.488: [GC[YG occupancy: 14241 K (118016 K)]1894.488: [Rescan
>>>> (parallel) , 0.0020670 secs]1894.490: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 38294K(158108K), 0.0021630 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1894.490: [CMS-concurrent-sweep-start]
>>>> 1894.494: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1894.494: [CMS-concurrent-reset-start]
>>>> 1894.503: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1896.503: [GC [1 CMS-initial-mark: 24053K(40092K)] 38422K(158108K),
>>>> 0.0025430 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1896.506: [CMS-concurrent-mark-start]
>>>> 1896.524: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1896.524: [CMS-concurrent-preclean-start]
>>>> 1896.525: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1896.525: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1901.540:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 1901.540: [GC[YG occupancy: 14690 K (118016 K)]1901.540: [Rescan
>>>> (parallel) , 0.0014810 secs]1901.542: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 38743K(158108K), 0.0015820 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1901.542: [CMS-concurrent-sweep-start]
>>>> 1901.545: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1901.545: [CMS-concurrent-reset-start]
>>>> 1901.555: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1903.555: [GC [1 CMS-initial-mark: 24053K(40092K)] 38871K(158108K),
>>>> 0.0025990 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1903.558: [CMS-concurrent-mark-start]
>>>> 1903.575: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1903.575: [CMS-concurrent-preclean-start]
>>>> 1903.576: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1903.576: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1908.586:
>>>> [CMS-concurrent-abortable-preclean: 0.105/5.010 secs] [Times:
>>>> user=0.10 sys=0.00, real=5.01 secs]
>>>> 1908.587: [GC[YG occupancy: 15207 K (118016 K)]1908.587: [Rescan
>>>> (parallel) , 0.0026240 secs]1908.589: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 39260K(158108K), 0.0027260 secs]
>>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>>> 1908.589: [CMS-concurrent-sweep-start]
>>>> 1908.593: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1908.593: [CMS-concurrent-reset-start]
>>>> 1908.602: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1910.602: [GC [1 CMS-initial-mark: 24053K(40092K)] 39324K(158108K),
>>>> 0.0025610 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1910.605: [CMS-concurrent-mark-start]
>>>> 1910.621: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1910.621: [CMS-concurrent-preclean-start]
>>>> 1910.622: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.01 sys=0.00, real=0.00 secs]
>>>> 1910.622: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1915.684:
>>>> [CMS-concurrent-abortable-preclean: 0.112/5.062 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.07 secs]
>>>> 1915.684: [GC[YG occupancy: 15592 K (118016 K)]1915.684: [Rescan
>>>> (parallel) , 0.0023940 secs]1915.687: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 24053K(40092K)] 39645K(158108K), 0.0025050 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 1915.687: [CMS-concurrent-sweep-start]
>>>> 1915.690: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1915.690: [CMS-concurrent-reset-start]
>>>> 1915.699: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1917.700: [GC [1 CMS-initial-mark: 24053K(40092K)] 39838K(158108K),
>>>> 0.0025010 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1917.702: [CMS-concurrent-mark-start]
>>>> 1917.719: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1917.719: [CMS-concurrent-preclean-start]
>>>> 1917.719: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1917.719: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1922.735:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.01, real=5.02 secs]
>>>> 1922.735: [GC[YG occupancy: 16198 K (118016 K)]1922.735: [Rescan
>>>> (parallel) , 0.0028750 secs]1922.738: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 40251K(158108K), 0.0029760 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 1922.738: [CMS-concurrent-sweep-start]
>>>> 1922.741: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1922.741: [CMS-concurrent-reset-start]
>>>> 1922.751: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1922.957: [GC [1 CMS-initial-mark: 24053K(40092K)] 40324K(158108K),
>>>> 0.0027380 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1922.960: [CMS-concurrent-mark-start]
>>>> 1922.978: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1922.978: [CMS-concurrent-preclean-start]
>>>> 1922.979: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1922.979: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1927.994:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.02 secs]
>>>> 1927.995: [GC[YG occupancy: 16645 K (118016 K)]1927.995: [Rescan
>>>> (parallel) , 0.0013210 secs]1927.996: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 40698K(158108K), 0.0017610 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 1927.996: [CMS-concurrent-sweep-start]
>>>> 1928.000: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1928.000: [CMS-concurrent-reset-start]
>>>> 1928.009: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1930.009: [GC [1 CMS-initial-mark: 24053K(40092K)] 40826K(158108K),
>>>> 0.0028310 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1930.012: [CMS-concurrent-mark-start]
>>>> 1930.028: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1930.028: [CMS-concurrent-preclean-start]
>>>> 1930.029: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1930.029: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1935.044:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 1935.045: [GC[YG occupancy: 17098 K (118016 K)]1935.045: [Rescan
>>>> (parallel) , 0.0015440 secs]1935.046: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 24053K(40092K)] 41151K(158108K), 0.0016490 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 1935.046: [CMS-concurrent-sweep-start]
>>>> 1935.050: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1935.050: [CMS-concurrent-reset-start]
>>>> 1935.059: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1937.059: [GC [1 CMS-initial-mark: 24053K(40092K)] 41279K(158108K),
>>>> 0.0028290 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1937.062: [CMS-concurrent-mark-start]
>>>> 1937.079: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1937.079: [CMS-concurrent-preclean-start]
>>>> 1937.079: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1937.079: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1942.095:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.01, real=5.02 secs]
>>>> 1942.095: [GC[YG occupancy: 17547 K (118016 K)]1942.095: [Rescan
>>>> (parallel) , 0.0030270 secs]1942.098: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 41600K(158108K), 0.0031250 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 1942.098: [CMS-concurrent-sweep-start]
>>>> 1942.101: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1942.101: [CMS-concurrent-reset-start]
>>>> 1942.111: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1944.111: [GC [1 CMS-initial-mark: 24053K(40092K)] 41728K(158108K),
>>>> 0.0028080 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1944.114: [CMS-concurrent-mark-start]
>>>> 1944.130: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1944.130: [CMS-concurrent-preclean-start]
>>>> 1944.131: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1944.131: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1949.146:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.02 secs]
>>>> 1949.146: [GC[YG occupancy: 17996 K (118016 K)]1949.146: [Rescan
>>>> (parallel) , 0.0028800 secs]1949.149: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 24053K(40092K)] 42049K(158108K), 0.0029810 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 1949.149: [CMS-concurrent-sweep-start]
>>>> 1949.152: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1949.152: [CMS-concurrent-reset-start]
>>>> 1949.162: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1951.162: [GC [1 CMS-initial-mark: 24053K(40092K)] 42177K(158108K),
>>>> 0.0028760 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1951.165: [CMS-concurrent-mark-start]
>>>> 1951.184: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1951.184: [CMS-concurrent-preclean-start]
>>>> 1951.184: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1951.184: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1956.244:
>>>> [CMS-concurrent-abortable-preclean: 0.112/5.059 secs] [Times:
>>>> user=0.11 sys=0.01, real=5.05 secs]
>>>> 1956.244: [GC[YG occupancy: 18498 K (118016 K)]1956.244: [Rescan
>>>> (parallel) , 0.0019760 secs]1956.246: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 42551K(158108K), 0.0020750 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>>> 1956.246: [CMS-concurrent-sweep-start]
>>>> 1956.249: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1956.249: [CMS-concurrent-reset-start]
>>>> 1956.259: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1958.259: [GC [1 CMS-initial-mark: 24053K(40092K)] 42747K(158108K),
>>>> 0.0029160 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1958.262: [CMS-concurrent-mark-start]
>>>> 1958.279: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1958.279: [CMS-concurrent-preclean-start]
>>>> 1958.279: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1958.279: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1963.295:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 1963.295: [GC[YG occupancy: 18951 K (118016 K)]1963.295: [Rescan
>>>> (parallel) , 0.0020140 secs]1963.297: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 43004K(158108K), 0.0021100 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 1963.297: [CMS-concurrent-sweep-start]
>>>> 1963.300: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1963.300: [CMS-concurrent-reset-start]
>>>> 1963.310: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1965.310: [GC [1 CMS-initial-mark: 24053K(40092K)] 43132K(158108K),
>>>> 0.0029420 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1965.313: [CMS-concurrent-mark-start]
>>>> 1965.329: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1965.329: [CMS-concurrent-preclean-start]
>>>> 1965.330: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1965.330: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1970.345:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.02 secs]
>>>> 1970.345: [GC[YG occupancy: 19400 K (118016 K)]1970.345: [Rescan
>>>> (parallel) , 0.0031610 secs]1970.349: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 43453K(158108K), 0.0032580 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 1970.349: [CMS-concurrent-sweep-start]
>>>> 1970.352: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1970.352: [CMS-concurrent-reset-start]
>>>> 1970.361: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1972.362: [GC [1 CMS-initial-mark: 24053K(40092K)] 43581K(158108K),
>>>> 0.0029960 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1972.365: [CMS-concurrent-mark-start]
>>>> 1972.381: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 1972.381: [CMS-concurrent-preclean-start]
>>>> 1972.382: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1972.382: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1977.397:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.02 secs]
>>>> 1977.398: [GC[YG occupancy: 19849 K (118016 K)]1977.398: [Rescan
>>>> (parallel) , 0.0018110 secs]1977.399: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 43902K(158108K), 0.0019100 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 1977.400: [CMS-concurrent-sweep-start]
>>>> 1977.403: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1977.403: [CMS-concurrent-reset-start]
>>>> 1977.412: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1979.413: [GC [1 CMS-initial-mark: 24053K(40092K)] 44031K(158108K),
>>>> 0.0030240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 1979.416: [CMS-concurrent-mark-start]
>>>> 1979.434: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
>>>> sys=0.00, real=0.02 secs]
>>>> 1979.434: [CMS-concurrent-preclean-start]
>>>> 1979.434: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1979.434: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1984.511:
>>>> [CMS-concurrent-abortable-preclean: 0.112/5.077 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.07 secs]
>>>> 1984.511: [GC[YG occupancy: 20556 K (118016 K)]1984.511: [Rescan
>>>> (parallel) , 0.0032740 secs]1984.514: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 44609K(158108K), 0.0033720 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.01 secs]
>>>> 1984.515: [CMS-concurrent-sweep-start]
>>>> 1984.518: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1984.518: [CMS-concurrent-reset-start]
>>>> 1984.527: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1986.528: [GC [1 CMS-initial-mark: 24053K(40092K)] 44737K(158108K),
>>>> 0.0032890 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1986.531: [CMS-concurrent-mark-start]
>>>> 1986.548: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 1986.548: [CMS-concurrent-preclean-start]
>>>> 1986.548: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1986.548: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1991.564:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.02 secs]
>>>> 1991.564: [GC[YG occupancy: 21005 K (118016 K)]1991.564: [Rescan
>>>> (parallel) , 0.0022540 secs]1991.566: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 45058K(158108K), 0.0023650 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 1991.566: [CMS-concurrent-sweep-start]
>>>> 1991.570: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 1991.570: [CMS-concurrent-reset-start]
>>>> 1991.579: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 1993.579: [GC [1 CMS-initial-mark: 24053K(40092K)] 45187K(158108K),
>>>> 0.0032480 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 1993.583: [CMS-concurrent-mark-start]
>>>> 1993.599: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 1993.599: [CMS-concurrent-preclean-start]
>>>> 1993.600: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 1993.600: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 1998.688:
>>>> [CMS-concurrent-abortable-preclean: 0.112/5.089 secs] [Times:
>>>> user=0.10 sys=0.01, real=5.09 secs]
>>>> 1998.689: [GC[YG occupancy: 21454 K (118016 K)]1998.689: [Rescan
>>>> (parallel) , 0.0025510 secs]1998.691: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 45507K(158108K), 0.0026500 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>>> 1998.691: [CMS-concurrent-sweep-start]
>>>> 1998.695: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 1998.695: [CMS-concurrent-reset-start]
>>>> 1998.704: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 2000.704: [GC [1 CMS-initial-mark: 24053K(40092K)] 45636K(158108K),
>>>> 0.0033350 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2000.708: [CMS-concurrent-mark-start]
>>>> 2000.726: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2000.726: [CMS-concurrent-preclean-start]
>>>> 2000.726: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2000.726: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2005.742:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.01 secs]
>>>> 2005.742: [GC[YG occupancy: 21968 K (118016 K)]2005.742: [Rescan
>>>> (parallel) , 0.0027300 secs]2005.745: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 46021K(158108K), 0.0028560 secs]
>>>> [Times: user=0.02 sys=0.01, real=0.01 secs]
>>>> 2005.745: [CMS-concurrent-sweep-start]
>>>> 2005.748: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2005.748: [CMS-concurrent-reset-start]
>>>> 2005.757: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.01, real=0.01 secs]
>>>> 2007.758: [GC [1 CMS-initial-mark: 24053K(40092K)] 46217K(158108K),
>>>> 0.0033290 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2007.761: [CMS-concurrent-mark-start]
>>>> 2007.778: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 2007.778: [CMS-concurrent-preclean-start]
>>>> 2007.778: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2007.778: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2012.794:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.02 secs]
>>>> 2012.794: [GC[YG occupancy: 22421 K (118016 K)]2012.794: [Rescan
>>>> (parallel) , 0.0036890 secs]2012.798: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 46474K(158108K), 0.0037910 secs]
>>>> [Times: user=0.02 sys=0.01, real=0.00 secs]
>>>> 2012.798: [CMS-concurrent-sweep-start]
>>>> 2012.801: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2012.801: [CMS-concurrent-reset-start]
>>>> 2012.810: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2012.980: [GC [1 CMS-initial-mark: 24053K(40092K)] 46547K(158108K),
>>>> 0.0033990 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 2012.984: [CMS-concurrent-mark-start]
>>>> 2013.004: [CMS-concurrent-mark: 0.019/0.020 secs] [Times: user=0.06
>>>> sys=0.01, real=0.02 secs]
>>>> 2013.004: [CMS-concurrent-preclean-start]
>>>> 2013.005: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2013.005: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2018.020:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.01 secs]
>>>> 2018.020: [GC[YG occupancy: 22867 K (118016 K)]2018.020: [Rescan
>>>> (parallel) , 0.0025180 secs]2018.023: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 46920K(158108K), 0.0026190 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>>> 2018.023: [CMS-concurrent-sweep-start]
>>>> 2018.026: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 2018.026: [CMS-concurrent-reset-start]
>>>> 2018.036: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 2020.036: [GC [1 CMS-initial-mark: 24053K(40092K)] 47048K(158108K),
>>>> 0.0034020 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2020.039: [CMS-concurrent-mark-start]
>>>> 2020.057: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2020.057: [CMS-concurrent-preclean-start]
>>>> 2020.058: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2020.058: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2025.073:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 2025.073: [GC[YG occupancy: 23316 K (118016 K)]2025.073: [Rescan
>>>> (parallel) , 0.0020110 secs]2025.075: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 47369K(158108K), 0.0021080 secs]
>>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>>> 2025.075: [CMS-concurrent-sweep-start]
>>>> 2025.079: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2025.079: [CMS-concurrent-reset-start]
>>>> 2025.088: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2027.088: [GC [1 CMS-initial-mark: 24053K(40092K)] 47498K(158108K),
>>>> 0.0034100 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2027.092: [CMS-concurrent-mark-start]
>>>> 2027.108: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2027.108: [CMS-concurrent-preclean-start]
>>>> 2027.109: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2027.109: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2032.120:
>>>> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
>>>> user=0.10 sys=0.00, real=5.01 secs]
>>>> 2032.120: [GC[YG occupancy: 23765 K (118016 K)]2032.120: [Rescan
>>>> (parallel) , 0.0025970 secs]2032.123: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 47818K(158108K), 0.0026940 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>>> 2032.123: [CMS-concurrent-sweep-start]
>>>> 2032.126: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 2032.126: [CMS-concurrent-reset-start]
>>>> 2032.135: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 2034.136: [GC [1 CMS-initial-mark: 24053K(40092K)] 47951K(158108K),
>>>> 0.0034720 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2034.139: [CMS-concurrent-mark-start]
>>>> 2034.156: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2034.156: [CMS-concurrent-preclean-start]
>>>> 2034.156: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2034.156: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2039.171:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 2039.172: [GC[YG occupancy: 24218 K (118016 K)]2039.172: [Rescan
>>>> (parallel) , 0.0038590 secs]2039.176: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 48271K(158108K), 0.0039560 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 2039.176: [CMS-concurrent-sweep-start]
>>>> 2039.179: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2039.179: [CMS-concurrent-reset-start]
>>>> 2039.188: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2041.188: [GC [1 CMS-initial-mark: 24053K(40092K)] 48400K(158108K),
>>>> 0.0035110 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2041.192: [CMS-concurrent-mark-start]
>>>> 2041.209: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 2041.209: [CMS-concurrent-preclean-start]
>>>> 2041.209: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2041.209: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2046.268:
>>>> [CMS-concurrent-abortable-preclean: 0.108/5.058 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.06 secs]
>>>> 2046.268: [GC[YG occupancy: 24813 K (118016 K)]2046.268: [Rescan
>>>> (parallel) , 0.0042050 secs]2046.272: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 48866K(158108K), 0.0043070 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>>> 2046.272: [CMS-concurrent-sweep-start]
>>>> 2046.275: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 2046.275: [CMS-concurrent-reset-start]
>>>> 2046.285: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2048.285: [GC [1 CMS-initial-mark: 24053K(40092K)] 48994K(158108K),
>>>> 0.0037700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2048.289: [CMS-concurrent-mark-start]
>>>> 2048.307: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2048.307: [CMS-concurrent-preclean-start]
>>>> 2048.307: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2048.307: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2053.323:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 2053.323: [GC[YG occupancy: 25262 K (118016 K)]2053.323: [Rescan
>>>> (parallel) , 0.0030780 secs]2053.326: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 49315K(158108K), 0.0031760 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 2053.326: [CMS-concurrent-sweep-start]
>>>> 2053.329: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2053.329: [CMS-concurrent-reset-start]
>>>> 2053.338: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2055.339: [GC [1 CMS-initial-mark: 24053K(40092K)] 49444K(158108K),
>>>> 0.0037730 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2055.343: [CMS-concurrent-mark-start]
>>>> 2055.359: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 2055.359: [CMS-concurrent-preclean-start]
>>>> 2055.360: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2055.360: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2060.373:
>>>> [CMS-concurrent-abortable-preclean: 0.107/5.013 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 2060.373: [GC[YG occupancy: 25715 K (118016 K)]2060.373: [Rescan
>>>> (parallel) , 0.0037090 secs]2060.377: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 49768K(158108K), 0.0038110 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 2060.377: [CMS-concurrent-sweep-start]
>>>> 2060.380: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2060.380: [CMS-concurrent-reset-start]
>>>> 2060.389: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2062.390: [GC [1 CMS-initial-mark: 24053K(40092K)] 49897K(158108K),
>>>> 0.0037860 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2062.394: [CMS-concurrent-mark-start]
>>>> 2062.410: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2062.410: [CMS-concurrent-preclean-start]
>>>> 2062.411: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2062.411: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2067.426:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.02 secs]
>>>> 2067.427: [GC[YG occupancy: 26231 K (118016 K)]2067.427: [Rescan
>>>> (parallel) , 0.0031980 secs]2067.430: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 50284K(158108K), 0.0033100 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>>> 2067.430: [CMS-concurrent-sweep-start]
>>>> 2067.433: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 2067.433: [CMS-concurrent-reset-start]
>>>> 2067.443: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2069.443: [GC [1 CMS-initial-mark: 24053K(40092K)] 50412K(158108K),
>>>> 0.0038060 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 2069.447: [CMS-concurrent-mark-start]
>>>> 2069.465: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2069.465: [CMS-concurrent-preclean-start]
>>>> 2069.465: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2069.465: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2074.535:
>>>> [CMS-concurrent-abortable-preclean: 0.112/5.070 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.06 secs]
>>>> 2074.535: [GC[YG occupancy: 26749 K (118016 K)]2074.535: [Rescan
>>>> (parallel) , 0.0040450 secs]2074.539: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 50802K(158108K), 0.0041460 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>>> 2074.539: [CMS-concurrent-sweep-start]
>>>> 2074.543: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2074.543: [CMS-concurrent-reset-start]
>>>> 2074.552: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 2076.552: [GC [1 CMS-initial-mark: 24053K(40092K)] 50930K(158108K),
>>>> 0.0038960 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 2076.556: [CMS-concurrent-mark-start]
>>>> 2076.575: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2076.575: [CMS-concurrent-preclean-start]
>>>> 2076.575: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2076.575: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2081.590:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.014 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 2081.590: [GC[YG occupancy: 27198 K (118016 K)]2081.590: [Rescan
>>>> (parallel) , 0.0042420 secs]2081.594: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 51251K(158108K), 0.0043450 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 2081.594: [CMS-concurrent-sweep-start]
>>>> 2081.597: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2081.597: [CMS-concurrent-reset-start]
>>>> 2081.607: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2083.607: [GC [1 CMS-initial-mark: 24053K(40092K)] 51447K(158108K),
>>>> 0.0038630 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2083.611: [CMS-concurrent-mark-start]
>>>> 2083.628: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2083.628: [CMS-concurrent-preclean-start]
>>>> 2083.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2083.628: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2088.642:
>>>> [CMS-concurrent-abortable-preclean: 0.108/5.014 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 2088.642: [GC[YG occupancy: 27651 K (118016 K)]2088.642: [Rescan
>>>> (parallel) , 0.0031520 secs]2088.645: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 51704K(158108K), 0.0032520 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 2088.645: [CMS-concurrent-sweep-start]
>>>> 2088.649: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2088.649: [CMS-concurrent-reset-start]
>>>> 2088.658: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2090.658: [GC [1 CMS-initial-mark: 24053K(40092K)] 51832K(158108K),
>>>> 0.0039130 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2090.662: [CMS-concurrent-mark-start]
>>>> 2090.678: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 2090.678: [CMS-concurrent-preclean-start]
>>>> 2090.679: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2090.679: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2095.690:
>>>> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 2095.690: [GC[YG occupancy: 28100 K (118016 K)]2095.690: [Rescan
>>>> (parallel) , 0.0024460 secs]2095.693: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 52153K(158108K), 0.0025460 secs]
>>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>>> 2095.693: [CMS-concurrent-sweep-start]
>>>> 2095.696: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 2095.696: [CMS-concurrent-reset-start]
>>>> 2095.705: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 2096.616: [GC [1 CMS-initial-mark: 24053K(40092K)] 53289K(158108K),
>>>> 0.0039340 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2096.620: [CMS-concurrent-mark-start]
>>>> 2096.637: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 2096.637: [CMS-concurrent-preclean-start]
>>>> 2096.638: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2096.638: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2101.654:
>>>> [CMS-concurrent-abortable-preclean: 0.110/5.015 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.01 secs]
>>>> 2101.654: [GC[YG occupancy: 29557 K (118016 K)]2101.654: [Rescan
>>>> (parallel) , 0.0034020 secs]2101.657: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 53610K(158108K), 0.0035000 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>>> 2101.657: [CMS-concurrent-sweep-start]
>>>> 2101.661: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2101.661: [CMS-concurrent-reset-start]
>>>> 2101.670: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2103.004: [GC [1 CMS-initial-mark: 24053K(40092K)] 53997K(158108K),
>>>> 0.0042590 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2103.009: [CMS-concurrent-mark-start]
>>>> 2103.027: [CMS-concurrent-mark: 0.017/0.019 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2103.027: [CMS-concurrent-preclean-start]
>>>> 2103.028: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2103.028: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2108.043:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.10 sys=0.01, real=5.02 secs]
>>>> 2108.043: [GC[YG occupancy: 30385 K (118016 K)]2108.044: [Rescan
>>>> (parallel) , 0.0048950 secs]2108.048: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 54438K(158108K), 0.0049930 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>>> 2108.049: [CMS-concurrent-sweep-start]
>>>> 2108.052: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2108.052: [CMS-concurrent-reset-start]
>>>> 2108.061: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2110.062: [GC [1 CMS-initial-mark: 24053K(40092K)] 54502K(158108K),
>>>> 0.0042120 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>>> 2110.066: [CMS-concurrent-mark-start]
>>>> 2110.084: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2110.084: [CMS-concurrent-preclean-start]
>>>> 2110.085: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2110.085: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2115.100:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 2115.101: [GC[YG occupancy: 30770 K (118016 K)]2115.101: [Rescan
>>>> (parallel) , 0.0049040 secs]2115.106: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 54823K(158108K), 0.0050080 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 2115.106: [CMS-concurrent-sweep-start]
>>>> 2115.109: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2115.109: [CMS-concurrent-reset-start]
>>>> 2115.118: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2117.118: [GC [1 CMS-initial-mark: 24053K(40092K)] 54952K(158108K),
>>>> 0.0042490 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2117.123: [CMS-concurrent-mark-start]
>>>> 2117.139: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 2117.139: [CMS-concurrent-preclean-start]
>>>> 2117.140: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2117.140: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2122.155:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.02 secs]
>>>> 2122.155: [GC[YG occupancy: 31219 K (118016 K)]2122.155: [Rescan
>>>> (parallel) , 0.0036460 secs]2122.159: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 55272K(158108K), 0.0037440 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>>> 2122.159: [CMS-concurrent-sweep-start]
>>>> 2122.162: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2122.162: [CMS-concurrent-reset-start]
>>>> 2122.172: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2124.172: [GC [1 CMS-initial-mark: 24053K(40092K)] 55401K(158108K),
>>>> 0.0043010 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 2124.176: [CMS-concurrent-mark-start]
>>>> 2124.195: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2124.195: [CMS-concurrent-preclean-start]
>>>> 2124.195: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2124.195: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2129.211:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.01 secs]
>>>> 2129.211: [GC[YG occupancy: 31669 K (118016 K)]2129.211: [Rescan
>>>> (parallel) , 0.0049870 secs]2129.216: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 55722K(158108K), 0.0050860 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>>> 2129.216: [CMS-concurrent-sweep-start]
>>>> 2129.219: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 2129.219: [CMS-concurrent-reset-start]
>>>> 2129.228: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2131.229: [GC [1 CMS-initial-mark: 24053K(40092K)] 55850K(158108K),
>>>> 0.0042340 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2131.233: [CMS-concurrent-mark-start]
>>>> 2131.249: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2131.249: [CMS-concurrent-preclean-start]
>>>> 2131.249: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2131.249: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2136.292:
>>>> [CMS-concurrent-abortable-preclean: 0.108/5.042 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.04 secs]
>>>> 2136.292: [GC[YG occupancy: 32174 K (118016 K)]2136.292: [Rescan
>>>> (parallel) , 0.0037250 secs]2136.296: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 56227K(158108K), 0.0038250 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 2136.296: [CMS-concurrent-sweep-start]
>>>> 2136.299: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2136.299: [CMS-concurrent-reset-start]
>>>> 2136.308: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2138.309: [GC [1 CMS-initial-mark: 24053K(40092K)] 56356K(158108K),
>>>> 0.0043040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2138.313: [CMS-concurrent-mark-start]
>>>> 2138.329: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.05
>>>> sys=0.01, real=0.02 secs]
>>>> 2138.329: [CMS-concurrent-preclean-start]
>>>> 2138.329: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2138.329: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2143.341:
>>>> [CMS-concurrent-abortable-preclean: 0.106/5.011 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 2143.341: [GC[YG occupancy: 32623 K (118016 K)]2143.341: [Rescan
>>>> (parallel) , 0.0038660 secs]2143.345: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 56676K(158108K), 0.0039760 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 2143.345: [CMS-concurrent-sweep-start]
>>>> 2143.349: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2143.349: [CMS-concurrent-reset-start]
>>>> 2143.358: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2145.358: [GC [1 CMS-initial-mark: 24053K(40092K)] 56805K(158108K),
>>>> 0.0043390 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2145.362: [CMS-concurrent-mark-start]
>>>> 2145.379: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 2145.379: [CMS-concurrent-preclean-start]
>>>> 2145.379: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2145.379: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2150.393:
>>>> [CMS-concurrent-abortable-preclean: 0.108/5.014 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 2150.393: [GC[YG occupancy: 33073 K (118016 K)]2150.393: [Rescan
>>>> (parallel) , 0.0038190 secs]2150.397: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 57126K(158108K), 0.0039210 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 2150.397: [CMS-concurrent-sweep-start]
>>>> 2150.400: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2150.400: [CMS-concurrent-reset-start]
>>>> 2150.410: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2152.410: [GC [1 CMS-initial-mark: 24053K(40092K)] 57254K(158108K),
>>>> 0.0044080 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2152.415: [CMS-concurrent-mark-start]
>>>> 2152.431: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 2152.431: [CMS-concurrent-preclean-start]
>>>> 2152.432: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2152.432: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2157.447:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.01, real=5.02 secs]
>>>> 2157.447: [GC[YG occupancy: 33522 K (118016 K)]2157.447: [Rescan
>>>> (parallel) , 0.0038130 secs]2157.451: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 57575K(158108K), 0.0039160 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>>> 2157.451: [CMS-concurrent-sweep-start]
>>>> 2157.454: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 2157.454: [CMS-concurrent-reset-start]
>>>> 2157.464: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 2159.464: [GC [1 CMS-initial-mark: 24053K(40092K)] 57707K(158108K),
>>>> 0.0045170 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2159.469: [CMS-concurrent-mark-start]
>>>> 2159.483: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>>> sys=0.00, real=0.01 secs]
>>>> 2159.483: [CMS-concurrent-preclean-start]
>>>> 2159.483: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2159.483: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2164.491:
>>>> [CMS-concurrent-abortable-preclean: 0.111/5.007 secs] [Times:
>>>> user=0.12 sys=0.00, real=5.01 secs]
>>>> 2164.491: [GC[YG occupancy: 34293 K (118016 K)]2164.491: [Rescan
>>>> (parallel) , 0.0052070 secs]2164.496: [weak refs processing, 0.0000120
>>>> secs] [1 CMS-remark: 24053K(40092K)] 58347K(158108K), 0.0053130 secs]
>>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>>> 2164.496: [CMS-concurrent-sweep-start]
>>>> 2164.500: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2164.500: [CMS-concurrent-reset-start]
>>>> 2164.509: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.01, real=0.01 secs]
>>>> 2166.509: [GC [1 CMS-initial-mark: 24053K(40092K)] 58475K(158108K),
>>>> 0.0045900 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2166.514: [CMS-concurrent-mark-start]
>>>> 2166.533: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.07
>>>> sys=0.00, real=0.02 secs]
>>>> 2166.533: [CMS-concurrent-preclean-start]
>>>> 2166.533: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2166.533: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2171.549:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.02 secs]
>>>> 2171.549: [GC[YG occupancy: 34743 K (118016 K)]2171.549: [Rescan
>>>> (parallel) , 0.0052200 secs]2171.554: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 58796K(158108K), 0.0053210 secs]
>>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>>> 2171.554: [CMS-concurrent-sweep-start]
>>>> 2171.558: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2171.558: [CMS-concurrent-reset-start]
>>>> 2171.567: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2173.567: [GC [1 CMS-initial-mark: 24053K(40092K)] 58924K(158108K),
>>>> 0.0046700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>>> 2173.572: [CMS-concurrent-mark-start]
>>>> 2173.588: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>>> sys=0.00, real=0.02 secs]
>>>> 2173.588: [CMS-concurrent-preclean-start]
>>>> 2173.589: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2173.589: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2178.604:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.10 sys=0.01, real=5.02 secs]
>>>> 2178.605: [GC[YG occupancy: 35192 K (118016 K)]2178.605: [Rescan
>>>> (parallel) , 0.0041460 secs]2178.609: [weak refs processing, 0.0000110
>>>> secs] [1 CMS-remark: 24053K(40092K)] 59245K(158108K), 0.0042450 secs]
>>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>>> 2178.609: [CMS-concurrent-sweep-start]
>>>> 2178.612: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
>>>> sys=0.00, real=0.00 secs]
>>>> 2178.612: [CMS-concurrent-reset-start]
>>>> 2178.622: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>>> sys=0.00, real=0.01 secs]
>>>> 2180.622: [GC [1 CMS-initial-mark: 24053K(40092K)] 59373K(158108K),
>>>> 0.0047200 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>>> 2180.627: [CMS-concurrent-mark-start]
>>>> 2180.645: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
>>>> sys=0.00, real=0.02 secs]
>>>> 2180.645: [CMS-concurrent-preclean-start]
>>>> 2180.645: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>>> user=0.00 sys=0.00, real=0.00 secs]
>>>> 2180.645: [CMS-concurrent-abortable-preclean-start]
>>>> CMS: abort preclean due to time 2185.661:
>>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>>> user=0.11 sys=0.00, real=5.01 secs]
>>>> 2185.661: [GC[YG occupancy: 35645 K (118016 K)]2185.661: [Rescan
>>>> (parallel) , 0.0050730 secs]2185.666: [weak refs processing, 0.0000100
>>>> secs] [1 CMS-remark: 24053K(40092K)] 59698K(158108K), 0.0051720 secs]
>>>> [Times: user=0.04 sys=0.01, real=0.01 secs]
>>>> 2185.666: [CMS-concurrent-sweep-start]
>>>> 2185.670: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>>> sys=0.00, real=0.00 secs]
>>>> 2185.670: [CMS-concurrent-reset-start]
>>>> 2185.679: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>>> sys=0.00, real=0.01 secs]
>>>> 2187.679: [GC [1 CMS-initial-mark: 24053K(40092K)] 59826K(158108K),
>>>> 0.0047350 secs]
>>>>
>>>> --
>>>> gregross:)
>>>>
>>>
>>
>>
>>
>> --
>> gregross:)
>>
>



-- 
gregross:)
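Reading thousands of wrapped GC lines like the ones above by hand is error-prone. A small illustrative Python sketch can pull out the stop-the-world events that actually matter; the marker strings, the one-second threshold, and the assumption that each event sits on a single unwrapped line are my own choices, not anything from this thread:

```python
import re

# Scan CMS GC log lines (as produced by -XX:+PrintGCDetails) for long
# stop-the-world events. Concurrent phases (mark, preclean, sweep) also
# report wall-clock time, so we filter to markers that indicate an
# actual application pause. Marker list and threshold are illustrative.
STW_MARKERS = ("CMS-initial-mark", "CMS-remark", "ParNew", "Full GC")
REAL_RE = re.compile(r"real=([\d.]+) secs")

def long_pauses(lines, threshold_secs=1.0):
    """Return real-time durations (seconds) of stop-the-world log
    lines whose pause exceeds threshold_secs."""
    pauses = []
    for line in lines:
        if not any(marker in line for marker in STW_MARKERS):
            continue  # skip concurrent phases and non-GC lines
        match = REAL_RE.search(line)
        if match and float(match.group(1)) > threshold_secs:
            pauses.append(float(match.group(1)))
    return pauses
```

Run against the log above, this flags nothing: the initial-mark and remark pauses are all a few milliseconds, and the 5-second "real" times belong to concurrent abortable-preclean phases, which do not stop the application. That points the investigation away from CMS itself and toward something else freezing the process.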

Re: long garbage collecting pause

Posted by Michael Segel <mi...@hotmail.com>.
There's more to it... like setting up the ParNew stuff. 

I think it should be detailed in the tuning section. 
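On the ParNew setup: a hedged sketch of what the region server's hbase-env.sh might look like with the young generation sized explicitly. The -Xmn value, occupancy fraction, and log path here are assumed values to tune per workload, not recommendations from this thread:

```shell
# hbase-env.sh -- illustrative GC settings only; -Xmn, the occupancy
# fraction, and the log path are assumptions, not prescriptions.
# -Xmn fixes the ParNew (young) generation size explicitly, and
# -XX:+UseCMSInitiatingOccupancyOnly makes CMS start a cycle at the
# configured fraction instead of falling back to its own heuristics.
export HBASE_OPTS="$HBASE_OPTS -Xms4g -Xmx4g -Xmn256m \
  -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/tmp/gc-regionserver.log"
```

Without -Xmn the young generation is sized by JVM ergonomics, which can leave ParNew collections either too frequent or too long for a region server's allocation pattern.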


On Oct 1, 2012, at 4:05 PM, Greg Ross <gr...@ngmoco.com> wrote:

> Thank, Michael.
> 
> We have hbase.hregion.memstore.mslab.enabled = true but have left the
> chunksize and max.allocation unset, so I assume they are at their
> default values.
> 
> Greg
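For reference, those two settings and what I believe are their 0.92-era defaults (2 MB chunks, 256 KB max allocation; verify against hbase-default.xml for your release) can be pinned explicitly in hbase-site.xml:

```xml
<!-- MSLAB settings; the values shown are believed defaults, not tuning advice -->
<property>
  <name>hbase.hregion.memstore.mslab.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.hregion.memstore.mslab.chunksize</name>
  <value>2097152</value>
</property>
<property>
  <name>hbase.hregion.memstore.mslab.max.allocation</name>
  <value>262144</value>
</property>
```

Cells larger than max.allocation bypass the MSLAB chunks entirely, so with 1 MB cells much of this workload's memstore data is allocated outside the slabs anyway.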
> 
> 
> On Mon, Oct 1, 2012 at 1:51 PM, Michael Segel <mi...@hotmail.com> wrote:
>> Have you implemented MSLABS?
>> 
>> On Oct 1, 2012, at 3:35 PM, Greg Ross <gr...@ngmoco.com> wrote:
>> 
>>> Hi,
>>> 
>>> I'm having difficulty with a mapreduce job that has reducers that read
>>> from and write to HBase, version 0.92.1, r1298924. Row sizes vary
>>> greatly. As do the number of cells, although the number of cells is
>>> typically numbered in the tens, at most. The max cell size is 1MB.
>>> 
>>> I see the following in the logs followed by the region server promptly
>>> shutting down:
>>> 
>>> 2012-10-01 19:08:47,858 [regionserver60020] WARN
>>> org.apache.hadoop.hbase.util.Sleeper: We slept 28970ms instead of
>>> 3000ms, this is likely due to a long garbage collecting pause and it's
>>> usually bad, see
>>> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
>>> 
>>> The full logs, including GC are below.
>>> 
>>> Although new to HBase, I've read up on the likely GC issues and their
>>> remedies. I've implemented the recommended solutions and still to no
>>> avail.
>>> 
>>> Here's what I've tried:
>>> 
>>> (1) increased the RAM to 4G
>>> (2) set -XX:+UseConcMarkSweepGC
>>> (3) set -XX:+UseParNewGC
>>> (4) set -XX:CMSInitiatingOccupancyFraction=N where I've attempted N=[40..70]
>>> (5) I've called context.progress() in the reducer before and after
>>> reading and writing
>>> (6) memstore is enabled
>>> 
>>> Is there anything else that I might have missed?
>>> 
>>> Thanks,
>>> 
>>> Greg
>>> 
>>> 
>>> hbase logs
>>> ========
>>> 
>>> 2012-10-01 19:09:48,293
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/.tmp/d2ee47650b224189b0c27d1c20929c03
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> 2012-10-01 19:09:48,884
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 5 file(s) in U of
>>> orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.
>>> into d2ee47650b224189b0c27d1c20929c03, size=723.0m; total size for
>>> store is 723.0m
>>> 2012-10-01 19:09:48,884
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.,
>>> storeName=U, fileCount=5, fileSize=1.4g, priority=2,
>>> time=10631266687564968; duration=35sec
>>> 2012-10-01 19:09:48,886
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
>>> 2012-10-01 19:09:48,887
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 5
>>> file(s) in U of
>>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/.tmp,
>>> seqid=132201184, totalSize=1.4g
>>> 2012-10-01 19:10:04,191
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/.tmp/2e5534fea8b24eaf9cc1e05dea788c01
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> 2012-10-01 19:10:04,868
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 5 file(s) in U of
>>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
>>> into 2e5534fea8b24eaf9cc1e05dea788c01, size=626.5m; total size for
>>> store is 626.5m
>>> 2012-10-01 19:10:04,868
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
>>> storeName=U, fileCount=5, fileSize=1.4g, priority=2,
>>> time=10631266696614208; duration=15sec
>>> 2012-10-01 19:14:04,992
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
>>> 2012-10-01 19:14:04,993
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/.tmp,
>>> seqid=132198830, totalSize=863.8m
>>> 2012-10-01 19:14:19,147
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/.tmp/b741f8501ad248418c48262d751f6e86
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/U/b741f8501ad248418c48262d751f6e86
>>> 2012-10-01 19:14:19,381
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
>>> into b741f8501ad248418c48262d751f6e86, size=851.4m; total size for
>>> store is 851.4m
>>> 2012-10-01 19:14:19,381
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.,
>>> storeName=U, fileCount=2, fileSize=863.8m, priority=5,
>>> time=10631557965747111; duration=14sec
>>> 2012-10-01 19:14:19,381
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
>>> 2012-10-01 19:14:19,381
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>> file(s) in U of
>>> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/.tmp,
>>> seqid=132198819, totalSize=496.7m
>>> 2012-10-01 19:14:27,337
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/.tmp/78040c736c4149a884a1bdcda9916416
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/U/78040c736c4149a884a1bdcda9916416
>>> 2012-10-01 19:14:27,514
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 3 file(s) in U of
>>> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
>>> into 78040c736c4149a884a1bdcda9916416, size=487.5m; total size for
>>> store is 487.5m
>>> 2012-10-01 19:14:27,514
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.,
>>> storeName=U, fileCount=3, fileSize=496.7m, priority=4,
>>> time=10631557966599560; duration=8sec
>>> 2012-10-01 19:14:27,514
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
>>> 2012-10-01 19:14:27,514
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>> file(s) in U of
>>> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/.tmp,
>>> seqid=132200816, totalSize=521.7m
>>> 2012-10-01 19:14:36,962
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/.tmp/0142b8bcdda948c185887358990af6d1
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/U/0142b8bcdda948c185887358990af6d1
>>> 2012-10-01 19:14:37,171
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 3 file(s) in U of
>>> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
>>> into 0142b8bcdda948c185887358990af6d1, size=510.7m; total size for
>>> store is 510.7m
>>> 2012-10-01 19:14:37,171
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.,
>>> storeName=U, fileCount=3, fileSize=521.7m, priority=4,
>>> time=10631557967125617; duration=9sec
>>> 2012-10-01 19:14:37,172
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
>>> 2012-10-01 19:14:37,172
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>> file(s) in U of
>>> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/.tmp,
>>> seqid=132198832, totalSize=565.5m
>>> 2012-10-01 19:14:57,082
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/.tmp/44a27dce8df04306908579c22be76786
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/U/44a27dce8df04306908579c22be76786
>>> 2012-10-01 19:14:57,429
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 3 file(s) in U of
>>> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
>>> into 44a27dce8df04306908579c22be76786, size=557.7m; total size for
>>> store is 557.7m
>>> 2012-10-01 19:14:57,429
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.,
>>> storeName=U, fileCount=3, fileSize=565.5m, priority=4,
>>> time=10631557967207683; duration=20sec
>>> 2012-10-01 19:14:57,429
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
>>> 2012-10-01 19:14:57,430
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>> file(s) in U of
>>> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/.tmp,
>>> seqid=132199414, totalSize=845.6m
>>> 2012-10-01 19:16:54,394
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/.tmp/771813ba0c87449ebd99d5e7916244f8
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/U/771813ba0c87449ebd99d5e7916244f8
>>> 2012-10-01 19:16:54,636
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 3 file(s) in U of
>>> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
>>> into 771813ba0c87449ebd99d5e7916244f8, size=827.3m; total size for
>>> store is 827.3m
>>> 2012-10-01 19:16:54,636
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.,
>>> storeName=U, fileCount=3, fileSize=845.6m, priority=4,
>>> time=10631557967560440; duration=1mins, 57sec
>>> 2012-10-01 19:16:54,636
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
>>> 2012-10-01 19:16:54,637
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>> file(s) in U of
>>> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/.tmp,
>>> seqid=132198824, totalSize=1012.4m
>>> 2012-10-01 19:17:35,610
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/.tmp/771a4124c763468c8dea927cb53887ee
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/U/771a4124c763468c8dea927cb53887ee
>>> 2012-10-01 19:17:35,874
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 3 file(s) in U of
>>> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
>>> into 771a4124c763468c8dea927cb53887ee, size=974.0m; total size for
>>> store is 974.0m
>>> 2012-10-01 19:17:35,875
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.,
>>> storeName=U, fileCount=3, fileSize=1012.4m, priority=4,
>>> time=10631557967678796; duration=41sec
>>> 2012-10-01 19:17:35,875
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
>>> 2012-10-01 19:17:35,875
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>> file(s) in U of
>>> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/.tmp,
>>> seqid=132198815, totalSize=530.5m
>>> 2012-10-01 19:17:47,481
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/.tmp/24328f8244f747bf8ba81b74ef2893fa
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/U/24328f8244f747bf8ba81b74ef2893fa
>>> 2012-10-01 19:17:47,741
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 3 file(s) in U of
>>> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
>>> into 24328f8244f747bf8ba81b74ef2893fa, size=524.0m; total size for
>>> store is 524.0m
>>> 2012-10-01 19:17:47,741
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.,
>>> storeName=U, fileCount=3, fileSize=530.5m, priority=4,
>>> time=10631557967807915; duration=11sec
>>> 2012-10-01 19:17:47,741
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
>>> 2012-10-01 19:17:47,741
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>> file(s) in U of
>>> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/.tmp,
>>> seqid=132201190, totalSize=529.3m
>>> 2012-10-01 19:17:58,031
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/.tmp/cae48d1b96eb4440a7bcd5fa3b4c070b
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/U/cae48d1b96eb4440a7bcd5fa3b4c070b
>>> 2012-10-01 19:17:58,232
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 3 file(s) in U of
>>> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
>>> into cae48d1b96eb4440a7bcd5fa3b4c070b, size=521.3m; total size for
>>> store is 521.3m
>>> 2012-10-01 19:17:58,232
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.,
>>> storeName=U, fileCount=3, fileSize=529.3m, priority=4,
>>> time=10631557967959079; duration=10sec
>>> 2012-10-01 19:17:58,232
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
>>> 2012-10-01 19:17:58,232
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>>> file(s) in U of
>>> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/.tmp,
>>> seqid=132199205, totalSize=475.2m
>>> 2012-10-01 19:18:06,764
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/.tmp/ba51afdc860048b6b2e1047b06fb3b29
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/U/ba51afdc860048b6b2e1047b06fb3b29
>>> 2012-10-01 19:18:07,065
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 3 file(s) in U of
>>> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
>>> into ba51afdc860048b6b2e1047b06fb3b29, size=474.5m; total size for
>>> store is 474.5m
>>> 2012-10-01 19:18:07,065
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.,
>>> storeName=U, fileCount=3, fileSize=475.2m, priority=4,
>>> time=10631557968104570; duration=8sec
>>> 2012-10-01 19:18:07,065
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
>>> 2012-10-01 19:18:07,065
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/.tmp,
>>> seqid=132198822, totalSize=522.5m
>>> 2012-10-01 19:18:18,306
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/.tmp/7a0bd16b11f34887b2690e9510071bf0
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/U/7a0bd16b11f34887b2690e9510071bf0
>>> 2012-10-01 19:18:18,439
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
>>> into 7a0bd16b11f34887b2690e9510071bf0, size=520.0m; total size for
>>> store is 520.0m
>>> 2012-10-01 19:18:18,440
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.,
>>> storeName=U, fileCount=2, fileSize=522.5m, priority=5,
>>> time=10631557965863914; duration=11sec
>>> 2012-10-01 19:18:18,440
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
>>> 2012-10-01 19:18:18,440
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/.tmp,
>>> seqid=132198823, totalSize=548.0m
>>> 2012-10-01 19:18:32,288
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/.tmp/dcd050acc2e747738a90aebaae8920e4
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/U/dcd050acc2e747738a90aebaae8920e4
>>> 2012-10-01 19:18:32,431
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
>>> into dcd050acc2e747738a90aebaae8920e4, size=528.2m; total size for
>>> store is 528.2m
>>> 2012-10-01 19:18:32,431
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.,
>>> storeName=U, fileCount=2, fileSize=548.0m, priority=5,
>>> time=10631557966071838; duration=13sec
>>> 2012-10-01 19:18:32,431
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
>>> 2012-10-01 19:18:32,431
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/.tmp,
>>> seqid=132199001, totalSize=475.9m
>>> 2012-10-01 19:18:43,154
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/.tmp/15a9167cd9754fd4b3674fe732648a03
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/U/15a9167cd9754fd4b3674fe732648a03
>>> 2012-10-01 19:18:43,322
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
>>> into 15a9167cd9754fd4b3674fe732648a03, size=475.9m; total size for
>>> store is 475.9m
>>> 2012-10-01 19:18:43,322
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.,
>>> storeName=U, fileCount=2, fileSize=475.9m, priority=5,
>>> time=10631557966273447; duration=10sec
>>> 2012-10-01 19:18:43,322
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
>>> 2012-10-01 19:18:43,322
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/.tmp,
>>> seqid=132198833, totalSize=824.8m
>>> 2012-10-01 19:19:00,252
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/.tmp/bf8da91da0824a909f684c3ecd0ee8da
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/U/bf8da91da0824a909f684c3ecd0ee8da
>>> 2012-10-01 19:19:00,788
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
>>> into bf8da91da0824a909f684c3ecd0ee8da, size=803.0m; total size for
>>> store is 803.0m
>>> 2012-10-01 19:19:00,788
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.,
>>> storeName=U, fileCount=2, fileSize=824.8m, priority=5,
>>> time=10631557966382580; duration=17sec
>>> 2012-10-01 19:19:00,788
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
>>> 2012-10-01 19:19:00,788
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/.tmp,
>>> seqid=132198810, totalSize=565.3m
>>> 2012-10-01 19:19:11,311
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/.tmp/5cd2032f48bc4287b8866165dcb6d3e6
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/U/5cd2032f48bc4287b8866165dcb6d3e6
>>> 2012-10-01 19:19:11,504
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
>>> into 5cd2032f48bc4287b8866165dcb6d3e6, size=553.5m; total size for
>>> store is 553.5m
>>> 2012-10-01 19:19:11,504
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.,
>>> storeName=U, fileCount=2, fileSize=565.3m, priority=5,
>>> time=10631557966480961; duration=10sec
>>> 2012-10-01 19:19:11,504
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
>>> 2012-10-01 19:19:11,504
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/.tmp,
>>> seqid=132198825, totalSize=519.6m
>>> 2012-10-01 19:19:22,186
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/.tmp/6f29b3b15f1747c196ac9aa79f4835b1
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/U/6f29b3b15f1747c196ac9aa79f4835b1
>>> 2012-10-01 19:19:22,437
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
>>> into 6f29b3b15f1747c196ac9aa79f4835b1, size=512.7m; total size for
>>> store is 512.7m
>>> 2012-10-01 19:19:22,437
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.,
>>> storeName=U, fileCount=2, fileSize=519.6m, priority=5,
>>> time=10631557966769107; duration=10sec
>>> 2012-10-01 19:19:22,437
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
>>> 2012-10-01 19:19:22,437
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/.tmp,
>>> seqid=132198836, totalSize=528.3m
>>> 2012-10-01 19:19:34,752
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/.tmp/d836630f7e2b4212848d7e4edc7238f1
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/U/d836630f7e2b4212848d7e4edc7238f1
>>> 2012-10-01 19:19:34,945
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
>>> into d836630f7e2b4212848d7e4edc7238f1, size=504.3m; total size for
>>> store is 504.3m
>>> 2012-10-01 19:19:34,945
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.,
>>> storeName=U, fileCount=2, fileSize=528.3m, priority=5,
>>> time=10631557967026388; duration=12sec
>>> 2012-10-01 19:19:34,945
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
>>> 2012-10-01 19:19:34,945
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/.tmp,
>>> seqid=132198841, totalSize=813.8m
>>> 2012-10-01 19:19:49,303
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/.tmp/c70692c971cd4e899957f9d5b189372e
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/U/c70692c971cd4e899957f9d5b189372e
>>> 2012-10-01 19:19:49,428
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
>>> into c70692c971cd4e899957f9d5b189372e, size=813.7m; total size for
>>> store is 813.7m
>>> 2012-10-01 19:19:49,428
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.,
>>> storeName=U, fileCount=2, fileSize=813.8m, priority=5,
>>> time=10631557967436197; duration=14sec
>>> 2012-10-01 19:19:49,428
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
>>> 2012-10-01 19:19:49,429
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/.tmp,
>>> seqid=132198642, totalSize=812.0m
>>> 2012-10-01 19:20:38,718
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/.tmp/bf99f97891ed42f7847a11cfb8f46438
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/U/bf99f97891ed42f7847a11cfb8f46438
>>> 2012-10-01 19:20:38,825
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
>>> into bf99f97891ed42f7847a11cfb8f46438, size=811.3m; total size for
>>> store is 811.3m
>>> 2012-10-01 19:20:38,825
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.,
>>> storeName=U, fileCount=2, fileSize=812.0m, priority=5,
>>> time=10631557968183922; duration=49sec
>>> 2012-10-01 19:20:38,826
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
>>> 2012-10-01 19:20:38,826
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/.tmp,
>>> seqid=132198138, totalSize=588.7m
>>> 2012-10-01 19:20:48,274
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/.tmp/9f44b7eeab58407ca98bb4ec90126035
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/U/9f44b7eeab58407ca98bb4ec90126035
>>> 2012-10-01 19:20:48,383
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
>>> into 9f44b7eeab58407ca98bb4ec90126035, size=573.4m; total size for
>>> store is 573.4m
>>> 2012-10-01 19:20:48,383
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.,
>>> storeName=U, fileCount=2, fileSize=588.7m, priority=5,
>>> time=10631557968302831; duration=9sec
>>> 2012-10-01 19:20:48,383
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
>>> 2012-10-01 19:20:48,383
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/.tmp,
>>> seqid=132198644, totalSize=870.8m
>>> 2012-10-01 19:21:04,998
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/.tmp/920844c25b1847c6ac4b880e8cf1d5b0
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/U/920844c25b1847c6ac4b880e8cf1d5b0
>>> 2012-10-01 19:21:05,107
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
>>> into 920844c25b1847c6ac4b880e8cf1d5b0, size=869.0m; total size for
>>> store is 869.0m
>>> 2012-10-01 19:21:05,107
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.,
>>> storeName=U, fileCount=2, fileSize=870.8m, priority=5,
>>> time=10631557968521590; duration=16sec
>>> 2012-10-01 19:21:05,107
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
>>> 2012-10-01 19:21:05,107
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/.tmp,
>>> seqid=132198622, totalSize=885.3m
>>> 2012-10-01 19:21:27,231
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/.tmp/c85d413975d642fc914253bd08f3484f
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/U/c85d413975d642fc914253bd08f3484f
>>> 2012-10-01 19:21:27,791
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
>>> into c85d413975d642fc914253bd08f3484f, size=848.3m; total size for
>>> store is 848.3m
>>> 2012-10-01 19:21:27,791
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.,
>>> storeName=U, fileCount=2, fileSize=885.3m, priority=5,
>>> time=10631557968628383; duration=22sec
>>> 2012-10-01 19:21:27,791
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>>> in region orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
>>> 2012-10-01 19:21:27,791
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>>> file(s) in U of
>>> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
>>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/.tmp,
>>> seqid=132198621, totalSize=796.5m
>>> 2012-10-01 19:21:42,374
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/.tmp/ce543c630dd142309af6dca2a9ab5786
>>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/U/ce543c630dd142309af6dca2a9ab5786
>>> 2012-10-01 19:21:42,515
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>>> of 2 file(s) in U of
>>> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
>>> into ce543c630dd142309af6dca2a9ab5786, size=795.5m; total size for
>>> store is 795.5m
>>> 2012-10-01 19:21:42,516
>>> [regionserver60020-largeCompactions-1348577979539] INFO
>>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>>> completed compaction:
>>> regionName=orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.,
>>> storeName=U, fileCount=2, fileSize=796.5m, priority=5,
>>> time=10631557968713853; duration=14sec
>>> 2012-10-01 19:49:58,159 [ResponseProcessor for block
>>> blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor
>>> exception  for block
>>> blk_5535637699691880681_51616301java.io.EOFException
>>>   at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>   at java.io.DataInputStream.readLong(DataInputStream.java:399)
>>>   at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:120)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2634)
>>> 
>>> 2012-10-01 19:49:58,167 [IPC Server handler 87 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
>>> {"processingtimems":46208,"client":"10.100.102.155:38534","timeRange":[0,9223372036854775807],"starttimems":1349120951956,"responsesize":329939,"class":"HRegionServer","table":"orwell_events","cacheBlocks":true,"families":{"U":["ALL"]},"row":"00322994","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
>>> 2012-10-01 19:49:58,160
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have
>>> not heard from server in 56633ms for sessionid 0x137ec64368509f7,
>>> closing socket connection and attempting reconnect
>>> 2012-10-01 19:49:58,160 [regionserver60020] WARN
>>> org.apache.hadoop.hbase.util.Sleeper: We slept 49116ms instead of
>>> 3000ms, this is likely due to a long garbage collecting pause and it's
>>> usually bad, see
>>> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
>>> 2012-10-01 19:49:58,160
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have
>>> not heard from server in 53359ms for sessionid 0x137ec64368509f6,
>>> closing socket connection and attempting reconnect
>>> 2012-10-01 19:49:58,320 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 waiting for responder to exit.
>>> 2012-10-01 19:49:58,380 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>>> 2012-10-01 19:49:58,380 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>>> 10.100.101.156:50010
>>> 2012-10-01 19:49:59,113 [regionserver60020] FATAL
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: Unhandled
>>> exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT
>>> rejected; currently processing
>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>>> org.apache.hadoop.hbase.YouAreDeadException:
>>> org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected;
>>> currently processing
>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>>>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>   at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
>>>   at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:797)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:688)
>>>   at java.lang.Thread.run(Thread.java:662)
>>> Caused by: org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected;
>>> currently processing
>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>>>   at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:222)
>>>   at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:148)
>>>   at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:844)
>>>   at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:918)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>   at $Proxy8.regionServerReport(Unknown Source)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:794)
>>>   ... 2 more
>>> 2012-10-01 19:49:59,114 [regionserver60020] FATAL
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>>> abort: loaded coprocessors are: []
>>> 2012-10-01 19:49:59,397 [IPC Server handler 36 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
>>> {"processingtimems":47521,"client":"10.100.102.176:60221","timeRange":[0,9223372036854775807],"starttimems":1349120951875,"responsesize":699312,"class":"HRegionServer","table":"orwell_events","cacheBlocks":true,"families":{"U":["ALL"]},"row":"00318223","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
>>> 2012-10-01 19:50:00,355 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
>>> primary datanode 10.100.102.122:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:00,355
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to
>>> server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181
>>> 2012-10-01 19:50:00,356
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>>> SASL-authenticate because the default JAAS configuration section
>>> 'Client' could not be found. If you are not using SASL, you may ignore
>>> this. On the other hand, if you expected SASL to work, please fix your
>>> JAAS configuration.
>>> 2012-10-01 19:50:00,356 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.122:50010 failed 1 times.  Pipeline was
>>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>>> retry...
>>> 2012-10-01 19:50:00,357
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>>> session
>>> 2012-10-01 19:50:00,358
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>>> server; r-o mode will be unavailable
>>> 2012-10-01 19:50:00,359 [regionserver60020-EventThread] FATAL
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272:
>>> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6
>>> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6 received
>>> expired from ZooKeeper, aborting
>>> org.apache.zookeeper.KeeperException$SessionExpiredException:
>>> KeeperErrorCode = Session expired
>>>   at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:374)
>>>   at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:271)
>>>   at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
>>>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
>>> 2012-10-01 19:50:00,359
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper
>>> service, session 0x137ec64368509f6 has expired, closing socket
>>> connection
>>> 2012-10-01 19:50:00,359 [regionserver60020-EventThread] FATAL
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>>> abort: loaded coprocessors are: []
>>> 2012-10-01 19:50:00,367 [regionserver60020-EventThread] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>>> readRequestsCount=6744201, writeRequestsCount=904280,
>>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1201,
>>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>>> blockCacheCount=5435, blockCacheHitCount=321294212,
>>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>>> hdfsBlocksLocalityIndex=97
>>> 2012-10-01 19:50:00,367 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>>> readRequestsCount=6744201, writeRequestsCount=904280,
>>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1201,
>>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>>> blockCacheCount=5435, blockCacheHitCount=321294212,
>>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>>> hdfsBlocksLocalityIndex=97
>>> 2012-10-01 19:50:00,381
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to
>>> server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181
>>> 2012-10-01 19:50:00,401 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unhandled
>>> exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT
>>> rejected; currently processing
>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>>> 2012-10-01 19:50:00,403
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>>> SASL-authenticate because the default JAAS configuration section
>>> 'Client' could not be found. If you are not using SASL, you may ignore
>>> this. On the other hand, if you expected SASL to work, please fix your
>>> JAAS configuration.
>>> 2012-10-01 19:50:00,412 [regionserver60020-EventThread] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED:
>>> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6
>>> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6 received
>>> expired from ZooKeeper, aborting
>>> 2012-10-01 19:50:00,412 [regionserver60020-EventThread] INFO
>>> org.apache.zookeeper.ClientCnxn: EventThread shut down
>>> 2012-10-01 19:50:00,412 [regionserver60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
>>> 2012-10-01 19:50:00,413
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>>> session
>>> 2012-10-01 19:50:00,413 [IPC Server handler 9 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,413 [IPC Server handler 20 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 20 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,413 [IPC Server handler 2 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,413 [IPC Server handler 10 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 10 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,413 [IPC Server listener on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on
>>> 60020
>>> 2012-10-01 19:50:00,413 [IPC Server handler 12 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 12 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 21 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 21 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 13 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 13 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 19 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 19 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 22 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 22 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 11 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 11 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.SplitLogWorker: Sending interrupt
>>> to stop the worker thread
>>> 2012-10-01 19:50:00,414 [IPC Server handler 6 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Stopping
>>> infoServer
>>> 2012-10-01 19:50:00,414 [IPC Server handler 0 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 28 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 28 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 7 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,413 [IPC Server handler 15 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 15 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 5 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 48 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 48 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,413 [IPC Server handler 14 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 14 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,413 [IPC Server handler 18 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 18 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 37 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 37 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 47 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 47 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 50 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 50 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 45 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 45 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 36 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 36 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 43 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 43 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 42 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 42 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 38 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 38 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 8 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 40 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 40 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 34 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 34 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 4 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@5fa9b60a,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320394"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.117:56438: output error
>>> 2012-10-01 19:50:00,414 [IPC Server handler 61 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 1768076108943205533:java.io.InterruptedIOException:
>>> Interruped while waiting for IO on channel
>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59104
>>> remote=/10.100.101.156:50010]. 59988 millis timeout left.
>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1243)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020
>>> caught: java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 31 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 31 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414
>>> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272]
>>> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
>>> SplitLogWorker interrupted while waiting for task, exiting:
>>> java.lang.InterruptedException
>>> 2012-10-01 19:50:00,563
>>> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272]
>>> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
>>> SplitLogWorker data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
>>> exiting
>>> 2012-10-01 19:50:00,414 [IPC Server handler 1 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 3201413024070455305:java.io.InterruptedIOException:
>>> Interruped while waiting for IO on channel
>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59115
>>> remote=/10.100.101.156:50010]. 59999 millis timeout left.
>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>   at java.io.DataInputStream.readShort(DataInputStream.java:295)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,563 [IPC Server Responder] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
>>> 2012-10-01 19:50:00,414 [IPC Server handler 27 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 27 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,414
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>>> server; r-o mode will be unavailable
>>> 2012-10-01 19:50:00,414 [IPC Server handler 55 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block -2144655386884254555:java.io.InterruptedIOException:
>>> Interruped while waiting for IO on channel
>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59108
>>> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1350)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,649
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper
>>> service, session 0x137ec64368509f7 has expired, closing socket
>>> connection
>>> 2012-10-01 19:50:00,414 [IPC Server handler 39 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.173:50010 for file
>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
>>> for block -2100467641393578191:java.io.InterruptedIOException:
>>> Interruped while waiting for IO on channel
>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:48825
>>> remote=/10.100.102.173:50010]. 60000 millis timeout left.
>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,414 [IPC Server handler 26 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block -5183799322211896791:java.io.InterruptedIOException:
>>> Interruped while waiting for IO on channel
>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59078
>>> remote=/10.100.101.156:50010]. 59949 millis timeout left.
>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,414 [IPC Server handler 85 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block -5183799322211896791:java.io.InterruptedIOException:
>>> Interruped while waiting for IO on channel
>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59082
>>> remote=/10.100.101.156:50010]. 59950 millis timeout left.
>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,414 [IPC Server handler 57 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block -1763662403960466408:java.io.InterruptedIOException:
>>> Interruped while waiting for IO on channel
>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59116
>>> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>   at java.io.DataInputStream.readShort(DataInputStream.java:295)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,649 [IPC Server handler 79 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,649 [regionserver60020-EventThread] INFO
>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>>> This client just lost it's session with ZooKeeper, trying to
>>> reconnect.
>>> 2012-10-01 19:50:00,649 [IPC Server handler 89 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,649 [PRI IPC Server handler 3 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 3 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,649 [PRI IPC Server handler 0 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,700 [IPC Server handler 56 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 56 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,649 [PRI IPC Server handler 2 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 2 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,701 [IPC Server handler 54 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 54 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,563 [IPC Server Responder] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
>>> 2012-10-01 19:50:00,701 [IPC Server handler 71 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 71 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,701 [IPC Server handler 79 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.193:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,563 [IPC Server handler 16 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 9 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 9 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,563 [IPC Server handler 17 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,415 [IPC Server handler 60 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@7eee7b96,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321525"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.125:49043: output error
>>> 2012-10-01 19:50:00,704 [IPC Server handler 3 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 6550563574061266649:java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,717 [IPC Server handler 49 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 49 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,717 [IPC Server handler 94 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 94 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,701 [IPC Server handler 83 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 83 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 1 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 1 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 7 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 7 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,701 [IPC Server handler 82 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 82 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 6 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 6 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,719 [IPC Server handler 16 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.107:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,701 [IPC Server handler 74 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 74 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,719 [IPC Server handler 86 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 86 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,721 [IPC Server handler 60 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020
>>> caught: java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:00,721 [IPC Server handler 60 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 5 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 5 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,701 [regionserver60020] INFO org.mortbay.log:
>>> Stopped SelectChannelConnector@0.0.0.0:60030
>>> 2012-10-01 19:50:00,722 [IPC Server handler 35 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 35 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,722 [IPC Server handler 16 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.133:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,722 [IPC Server handler 98 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 98 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,701 [IPC Server handler 68 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 68 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,701 [IPC Server handler 64 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 64 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,673 [IPC Server handler 33 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 33 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,736 [IPC Server handler 76 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 76 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,673 [regionserver60020-EventThread] INFO
>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>>> Trying to reconnect to zookeeper
>>> 2012-10-01 19:50:00,736 [IPC Server handler 84 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 84 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,736 [IPC Server handler 95 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 95 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,736 [IPC Server handler 75 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 75 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,736 [IPC Server handler 92 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 92 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,736 [IPC Server handler 88 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 88 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,736 [IPC Server handler 67 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 67 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,736 [IPC Server handler 30 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 30 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,736 [IPC Server handler 80 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 80 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,736 [IPC Server handler 62 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 62 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,722 [IPC Server handler 52 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 52 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,722 [IPC Server handler 32 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 32 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,722 [IPC Server handler 97 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 97 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,722 [IPC Server handler 96 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 96 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,722 [IPC Server handler 93 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 93 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,722 [IPC Server handler 73 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 73 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,722 [IPC Server handler 17 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.47:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,722 [IPC Server handler 87 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 87 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,721 [IPC Server handler 81 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 81 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,721 [IPC Server handler 90 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,721 [IPC Server handler 59 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>> for block -9081461281107361903:java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,719 [IPC Server handler 65 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 65 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,721 [IPC Server handler 29 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 8387547514055202675:java.nio.channels.ClosedChannelException
>>>   at java.nio.channels.spi.AbstractSelectableChannel.configureBlocking(AbstractSelectableChannel.java:252)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.<init>(SocketIOWithTimeout.java:66)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.<init>(SocketInputStream.java:50)
>>>   at org.apache.hadoop.net.SocketInputStream.<init>(SocketInputStream.java:73)
>>>   at org.apache.hadoop.net.SocketInputStream.<init>(SocketInputStream.java:91)
>>>   at org.apache.hadoop.net.NetUtils.getInputStream(NetUtils.java:323)
>>>   at org.apache.hadoop.net.NetUtils.getInputStream(NetUtils.java:299)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1474)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,721 [IPC Server handler 66 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 1768076108943205533:java.io.InterruptedIOException:
>>> Interruped while waiting for IO on channel
>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59074
>>> remote=/10.100.101.156:50010]. 59947 millis timeout left.
>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,811 [IPC Server handler 59 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.135:50010 for file
>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>> for block -9081461281107361903:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,719 [IPC Server handler 58 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.io.InterruptedIOException:
>>> Interruped while waiting for IO on channel
>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59107
>>> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,831 [IPC Server handler 29 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.153:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,719 [IPC Server handler 39 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.144:50010 for file
>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
>>> for block -2100467641393578191:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,719 [IPC Server handler 26 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.138:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,852 [IPC Server handler 66 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.174:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,719 [IPC Server handler 41 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>> for block 5946486101046455013:java.io.InterruptedIOException:
>>> Interruped while waiting for IO on channel
>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59091
>>> remote=/10.100.101.156:50010]. 59953 millis timeout left.
>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,719 [IPC Server handler 1 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.148:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,719 [IPC Server handler 53 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 53 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,719 [IPC Server handler 79 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.154:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,719 [IPC Server handler 89 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.47:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,719 [IPC Server handler 46 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 4946845190538507957:java.io.InterruptedIOException:
>>> Interruped while waiting for IO on channel
>>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59113
>>> remote=/10.100.101.156:50010]. 59999 millis timeout left.
>>>   at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>   at java.io.DataInputStream.readShort(DataInputStream.java:295)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,895 [IPC Server handler 26 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.139:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,701 [IPC Server handler 91 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 91 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,717 [IPC Server handler 3 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.114:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 6550563574061266649:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,899 [IPC Server handler 89 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.134:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,717 [PRI IPC Server handler 4 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 4 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,717 [IPC Server handler 77 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 77 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,717 [PRI IPC Server handler 8 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 8 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,717 [IPC Server handler 99 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 99 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,717 [IPC Server handler 85 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.138:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,717 [IPC Server handler 51 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 51 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,717 [IPC Server handler 57 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.138:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,717 [IPC Server handler 55 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.180:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block -2144655386884254555:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,717 [IPC Server handler 70 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 70 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,717 [IPC Server handler 61 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.174:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,904 [IPC Server handler 55 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.173:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block -2144655386884254555:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,705 [IPC Server handler 23 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,705 [IPC Server handler 24 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 2851854722247682142:java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,704 [IPC Server handler 25 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>>   at java.io.DataInputStream.read(DataInputStream.java:132)
>>>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>>   at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>>   at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>>   at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>>   at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>>   at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,904 [IPC Server handler 61 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.97:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,904 [IPC Server handler 23 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.144:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,904 [regionserver60020-EventThread] INFO
>>> org.apache.zookeeper.ZooKeeper: Initiating client connection,
>>> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
>>> sessionTimeout=180000 watcher=hconnection
>>> 2012-10-01 19:50:00,905 [IPC Server handler 25 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.72:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,904 [IPC Server handler 55 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_-2144655386884254555_51616216 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,904 [IPC Server handler 57 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.144:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,901 [IPC Server handler 85 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_-5183799322211896791_51616591 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,899 [IPC Server handler 89 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_5937357897784147544_51616546 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,899 [IPC Server handler 3 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_6550563574061266649_51616152 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,896 [IPC Server handler 46 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_4946845190538507957_51616628 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,896 [IPC Server handler 41 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.133:50010 for file
>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,896 [IPC Server handler 26 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_-5183799322211896791_51616591 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,896 [IPC Server handler 1 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.175:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,895 [IPC Server handler 66 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.97:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,894 [IPC Server handler 39 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.151:50010 for file
>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
>>> for block -2100467641393578191:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,894 [IPC Server handler 79 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_2209451090614340242_51616188 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,857 [IPC Server handler 29 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.101:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,856 [IPC Server handler 58 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.134:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,839 [IPC Server handler 59 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.194:50010 for file
>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>> for block -9081461281107361903:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,811 [IPC Server handler 16 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_4946845190538507957_51616628 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,787 [IPC Server handler 90 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.134:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,780 [IPC Server handler 17 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.134:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,736 [IPC Server handler 63 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 63 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,736 [IPC Server handler 72 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 72 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,736 [IPC Server handler 78 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 78 on 60020:
>>> exiting
>>> 2012-10-01 19:50:00,906 [IPC Server handler 59 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_-9081461281107361903_51616031 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,906 [IPC Server handler 39 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_-2100467641393578191_51531005 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,906 [IPC Server handler 41 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.145:50010 for file
>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,905 [IPC Server handler 57 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_-1763662403960466408_51616605 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,905 [IPC Server handler 25 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.162:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,904 [IPC Server handler 23 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_-1763662403960466408_51616605 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,904 [IPC Server handler 24 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.72:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:00,904 [IPC Server handler 61 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_1768076108943205533_51616106 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:00,941 [regionserver60020-SendThread()] INFO
>>> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
>>> /10.100.102.197:2181
>>> 2012-10-01 19:50:00,941 [regionserver60020-EventThread] INFO
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
>>> of this process is 20776@data3024.ngpipes.milp.ngmoco.com
>>> 2012-10-01 19:50:00,942
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>>> SASL-authenticate because the default JAAS configuration section
>>> 'Client' could not be found. If you are not using SASL, you may ignore
>>> this. On the other hand, if you expected SASL to work, please fix your
>>> JAAS configuration.
>>> 2012-10-01 19:50:00,943
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>>> session
>>> 2012-10-01 19:50:00,962
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>>> server; r-o mode will be unavailable
>>> 2012-10-01 19:50:00,962
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
>>> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
>>> sessionid = 0x137ec64373dd4b3, negotiated timeout = 40000
>>> 2012-10-01 19:50:00,971 [regionserver60020-EventThread] INFO
>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>>> Reconnected successfully. This disconnect could have been caused by a
>>> network partition or a long-running GC pause, either way it's
>>> recommended that you verify your environment.
>>> 2012-10-01 19:50:00,971 [regionserver60020-EventThread] INFO
>>> org.apache.zookeeper.ClientCnxn: EventThread shut down
>>> 2012-10-01 19:50:01,018 [IPC Server handler 41 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,018 [IPC Server handler 24 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,019 [IPC Server handler 41 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.133:50010 for file
>>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>>> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,019 [IPC Server handler 41 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_5946486101046455013_51616031 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:01,020 [IPC Server handler 25 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.162:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,021 [IPC Server handler 29 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,023 [IPC Server handler 90 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.47:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,023 [IPC Server handler 17 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.47:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,024 [IPC Server handler 66 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.174:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,024 [IPC Server handler 61 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@20c6e4bc,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321393"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.118:57165: output error
>>> 2012-10-01 19:50:01,038 [IPC Server handler 61 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:01,038 [IPC Server handler 58 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.134:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,038 [IPC Server handler 61 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020:
>>> exiting
>>> 2012-10-01 19:50:01,038 [IPC Server handler 1 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.148:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,039 [IPC Server handler 66 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.97:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,039 [IPC Server handler 29 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.153:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,039 [IPC Server handler 66 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_1768076108943205533_51616106 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:01,039 [IPC Server handler 29 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.102.101:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,041 [IPC Server handler 29 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.156:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,042 [IPC Server handler 29 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.153:50010 for file
>>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,044 [IPC Server handler 1 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>>> /10.100.101.175:50010 for file
>>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 
>>> 2012-10-01 19:50:01,090 [IPC Server handler 29 on 60020] ERROR
>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
>>> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03,
>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>> [cacheCompressed=false],
>>> firstKey=00321084/U:BAHAMUTIOS_1/1348883706322/Put,
>>> lastKey=00324324/U:user/1348900694793/Put, avgKeyLen=31,
>>> avgValueLen=125185, entries=6053, length=758129544,
>>> cur=00321312/U:KINGDOMSQUESTSIPAD_2/1349024761759/Put/vlen=460950]
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Could not obtain block:
>>> blk_8387547514055202675_51616042
>>> file=/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   ... 17 more
>>> 2012-10-01 19:50:01,091 [IPC Server handler 24 on 60020] ERROR
>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>> [cacheCompressed=false],
>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>> avgValueLen=89140, entries=7365, length=656954017,
>>> cur=00318964/U:user/1349118541276/Put/vlen=311046]
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Could not obtain block:
>>> blk_2851854722247682142_51616579
>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   ... 14 more
>>> 2012-10-01 19:50:01,091 [IPC Server handler 1 on 60020] ERROR
>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>> [cacheCompressed=false],
>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>> avgValueLen=89140, entries=7365, length=656954017,
>>> cur=0032027/U:KINGDOMSQUESTS_10/1349118531396/Put/vlen=401149]
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Could not obtain block:
>>> blk_3201413024070455305_51616611
>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   ... 14 more
>>> 2012-10-01 19:50:01,091 [IPC Server handler 25 on 60020] ERROR
>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>> [cacheCompressed=false],
>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>> avgValueLen=89140, entries=7365, length=656954017,
>>> cur=00319173/U:TINYTOWERANDROID_3/1349024232716/Put/vlen=129419]
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Could not obtain block:
>>> blk_2851854722247682142_51616579
>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   ... 14 more
>>> 2012-10-01 19:50:01,091 [IPC Server handler 90 on 60020] ERROR
>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>> [cacheCompressed=false],
>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>> avgValueLen=89140, entries=7365, length=656954017,
>>> cur=00316914/U:PETCAT_2/1349118542022/Put/vlen=499140]
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Could not obtain block:
>>> blk_5937357897784147544_51616546
>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   ... 14 more
>>> 2012-10-01 19:50:01,091 [IPC Server handler 17 on 60020] ERROR
>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
>>> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>> [cacheCompressed=false],
>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>> avgValueLen=89140, entries=7365, length=656954017,
>>> cur=00317054/U:BAHAMUTIOS_4/1348869430278/Put/vlen=104012]
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Could not obtain block:
>>> blk_5937357897784147544_51616546
>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   ... 17 more
>>> 2012-10-01 19:50:01,091 [IPC Server handler 58 on 60020] ERROR
>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>> [cacheCompressed=false],
>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>> avgValueLen=89140, entries=7365, length=656954017,
>>> cur=00316983/U:TINYTOWERANDROID_1/1349118439250/Put/vlen=417924]
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Could not obtain block:
>>> blk_5937357897784147544_51616546
>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>>   ... 14 more
>>> 2012-10-01 19:50:01,091 [IPC Server handler 89 on 60020] ERROR
>>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>>> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
>>> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>>> [cacheCompressed=false],
>>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>>> avgValueLen=89140, entries=7365, length=656954017,
>>> cur=00317043/U:BAHAMUTANDROID_7/1348968079952/Put/vlen=419212]
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Could not obtain block:
>>> blk_5937357897784147544_51616546
>>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>>   ... 17 more
>>> 2012-10-01 19:50:01,094 [IPC Server handler 58 on 60020] WARN
>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>> server
>>> java.lang.InterruptedException
>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 2012-10-01 19:50:01,094 [IPC Server handler 17 on 60020] WARN
>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>> server
>>> java.lang.InterruptedException
>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 2012-10-01 19:50:01,093 [IPC Server handler 90 on 60020] WARN
>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>> server
>>> java.lang.InterruptedException
>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 2012-10-01 19:50:01,093 [IPC Server handler 25 on 60020] WARN
>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>> server
>>> java.lang.InterruptedException
>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 2012-10-01 19:50:01,092 [IPC Server handler 1 on 60020] WARN
>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>> server
>>> java.lang.InterruptedException
>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 2012-10-01 19:50:01,092 [IPC Server handler 24 on 60020] WARN
>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>> server
>>> java.lang.InterruptedException
>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 2012-10-01 19:50:01,091 [IPC Server handler 29 on 60020] WARN
>>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>>> server
>>> java.lang.InterruptedException
>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 2012-10-01 19:50:01,095 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>>> 2012-10-01 19:50:01,097 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>>> 10.100.101.156:50010
>>> 2012-10-01 19:50:01,115 [IPC Server handler 39 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@2743ecf8,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00390925"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.122:51758: output error
>>> 2012-10-01 19:50:01,139 [IPC Server handler 39 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:01,139 [IPC Server handler 39 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020:
>>> exiting
>>> 2012-10-01 19:50:01,151 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
>>> primary datanode 10.100.102.122:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:01,151 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.122:50010 failed 2 times.  Pipeline was
>>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>>> retry...
>>> 2012-10-01 19:50:01,153 [IPC Server handler 89 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@7137feec,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00317043"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.68:55302: output error
>>> 2012-10-01 19:50:01,154 [IPC Server handler 89 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:01,154 [IPC Server handler 89 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020:
>>> exiting
>>> 2012-10-01 19:50:01,156 [IPC Server handler 66 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@6b9a9eba,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321504"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.176:32793: output error
>>> 2012-10-01 19:50:01,157 [IPC Server handler 66 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:01,158 [IPC Server handler 66 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020:
>>> exiting
>>> 2012-10-01 19:50:01,159 [IPC Server handler 41 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@586761c,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00391525"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.155:39850: output error
>>> 2012-10-01 19:50:01,160 [IPC Server handler 41 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:01,160 [IPC Server handler 41 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020:
>>> exiting
>>> 2012-10-01 19:50:01,216 [regionserver60020.compactionChecker] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer$CompactionChecker:
>>> regionserver60020.compactionChecker exiting
>>> 2012-10-01 19:50:01,216 [regionserver60020.logRoller] INFO
>>> org.apache.hadoop.hbase.regionserver.LogRoller: LogRoller exiting.
>>> 2012-10-01 19:50:01,216 [regionserver60020.cacheFlusher] INFO
>>> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
>>> regionserver60020.cacheFlusher exiting
>>> 2012-10-01 19:50:01,217 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: aborting server
>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
>>> 2012-10-01 19:50:01,218 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>>> Closed zookeeper sessionid=0x137ec64373dd4b3
>>> 2012-10-01 19:50:01,270
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,24294294,1349027918385.068e6f4f7b8a81fb21e49fe3ac47f262.
>>> 2012-10-01 19:50:01,271
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,96510144,1348960969795.fe2a133a17d09a65a6b0d4fb60e6e051.
>>> 2012-10-01 19:50:01,272
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56499174,1349027424070.7f767ca333bef3dcdacc9a6c673a8350.
>>> 2012-10-01 19:50:01,273
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,96515494,1348960969795.8ab4e1d9f4e4c388f3f4f18eec637e8a.
>>> 2012-10-01 19:50:01,273
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,98395724,1348969940123.08188cc246bf752c17cfe57f99970924.
>>> 2012-10-01 19:50:01,274
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
>>> 2012-10-01 19:50:01,275
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56604984,1348940650040.14639a082062e98abfea8ae3fff5d2c7.
>>> 2012-10-01 19:50:01,275
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56880144,1348969971950.ece85a086a310aacc2da259a3303e67e.
>>> 2012-10-01 19:50:01,276
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
>>> 2012-10-01 19:50:01,277
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,31267284,1348961229728.fc429276c44f5c274f00168f12128bad.
>>> 2012-10-01 19:50:01,278
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56569824,1348940809479.9808dac5b895fc9b8f9892c4b72b3804.
>>> 2012-10-01 19:50:01,279
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56425354,1349031095620.e4965f2e57729ff9537986da3e19258c.
>>> 2012-10-01 19:50:01,280
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,96504305,1348964001164.77f75cf8ba76ebc4417d49f019317d0a.
>>> 2012-10-01 19:50:01,280
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,60743825,1348962513777.f377f704db5f0d000e36003338e017b1.
>>> 2012-10-01 19:50:01,283
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,09603014,1349026790546.d634bfe659bdf2f45ec89e53d2d38791.
>>> 2012-10-01 19:50:01,283
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,31274021,1348961229728.e93382b458a84c22f2e5aeb9efa737b5.
>>> 2012-10-01 19:50:01,285
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56462454,1348982699951.a2dafbd054bf65aa6f558dc9a2d839a1.
>>> 2012-10-01 19:50:01,286
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> Orwell,48814673,1348270987327.29818ea19d62126d5616a7ba7d7dae21.
>>> 2012-10-01 19:50:01,288
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56610954,1348940650040.3609c1bfc2be6936577b6be493e7e8d9.
>>> 2012-10-01 19:50:01,289
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
>>> 2012-10-01 19:50:01,289
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,05205763,1348941089603.957ea0e428ba6ff21174ecdda96f9fdc.
>>> 2012-10-01 19:50:01,289
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56349615,1348941138879.dfabbd25c59fd6c34a58d9eacf4c096f.
>>> 2012-10-01 19:50:01,292
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56503505,1349027424070.129160a78f13c17cc9ea16ff3757cda9.
>>> 2012-10-01 19:50:01,292
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,91248264,1348942310344.a93982b8f91f260814885bc0afb4fbb9.
>>> 2012-10-01 19:50:01,293
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,98646724,1348980566403.a4f2a16d1278ad1246068646c4886502.
>>> 2012-10-01 19:50:01,293
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56454594,1348982903997.7107c6a1b2117fb59f68210ce82f2cc9.
>>> 2012-10-01 19:50:01,294
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56564144,1348940809479.636092bb3ec2615b115257080427d091.
>>> 2012-10-01 19:50:01,295
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_user_events,06252594,1348582793143.499f0a0f4704afa873c83f141f5e0324.
>>> 2012-10-01 19:50:01,296
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56617164,1348941287729.3992a80a6648ab62753b4998331dcfdf.
>>> 2012-10-01 19:50:01,296
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,98390944,1348969940123.af160e450632411818fa8d01b2c2ed0b.
>>> 2012-10-01 19:50:01,297
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56703743,1348941223663.5cc2fcb82080dbf14956466c31f1d27c.
>>> 2012-10-01 19:50:01,297
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
>>> 2012-10-01 19:50:01,298
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56693584,1348942631318.f01b179c1fad1f18b97b37fc8f730898.
>>> 2012-10-01 19:50:01,299
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_user_events,12140615,1348582250428.7822f7f5ceea852b04b586fdf34debff.
>>> 2012-10-01 19:50:01,300
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
>>> 2012-10-01 19:50:01,300
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,96420705,1348942597601.a063e06eb840ee49bb88474ee8e22160.
>>> 2012-10-01 19:50:01,300
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
>>> 2012-10-01 19:50:01,300
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,96432674,1348961425148.1a793cf2137b9599193a1e2d5d9749c5.
>>> 2012-10-01 19:50:01,302
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
>>> 2012-10-01 19:50:01,303
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,44371574,1348961840615.00f5b4710a43f2ee75d324bebb054323.
>>> 2012-10-01 19:50:01,304
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,562fc921,1348941189517.cff261c585416844113f232960c8d6b4.
>>> 2012-10-01 19:50:01,304
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56323831,1348941216581.0b0f3bdb03ce9e4f58156a4143018e0e.
>>> 2012-10-01 19:50:01,305
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56480194,1349028080664.03a7046ffcec7e1f19cdb2f9890a353e.
>>> 2012-10-01 19:50:01,306
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56418294,1348940288044.c872be05981c047e8c1ee4765b92a74d.
>>> 2012-10-01 19:50:01,306
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,53590305,1348940776419.4c98d7846622f2d8dad4e998dae81d2b.
>>> 2012-10-01 19:50:01,307
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,96445963,1348942353563.66a0f602720191bf21a1dfd12eec4a35.
>>> 2012-10-01 19:50:01,307
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
>>> 2012-10-01 19:50:01,307
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56305294,1348941189517.20f67941294c259e2273d3e0b7ae5198.
>>> 2012-10-01 19:50:01,308
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56516115,1348981132325.0f753cb87c1163d95d9d10077d6308db.
>>> 2012-10-01 19:50:01,309
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56796924,1348941269761.843e0aee0b15d67b810c7b3fe5a2dda7.
>>> 2012-10-01 19:50:01,309
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56440004,1348941150045.7033cb81a66e405d7bf45cd55ab010e3.
>>> 2012-10-01 19:50:01,309
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56317864,1348941124299.0de45283aa626fc83b2c026e1dd8bfec.
>>> 2012-10-01 19:50:01,310
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56809673,1348941834500.08244d4ed5f7fdf6d9ac9c73fbfd3947.
>>> 2012-10-01 19:50:01,310
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56894864,1348970959541.fc19a6ffe18f29203369d32ad1b102ce.
>>> 2012-10-01 19:50:01,311
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56382491,1348940876960.2392137bf0f4cb695c08c0fb22ce5294.
>>> 2012-10-01 19:50:01,312
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,95128264,1349026585563.5dc569af8afe0a84006b80612c15007f.
>>> 2012-10-01 19:50:01,312
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,5631146,1348941124299.b7c10be9855b5e8ba3a76852920627f9.
>>> 2012-10-01 19:50:01,312
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56710424,1348940462668.a370c149c232ebf4427e070eb28079bc.
>>> 2012-10-01 19:50:01,314 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Session: 0x137ec64373dd4b3 closed
>>> 2012-10-01 19:50:01,314 [regionserver60020-EventThread] INFO
>>> org.apache.zookeeper.ClientCnxn: EventThread shut down
>>> 2012-10-01 19:50:01,314 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 78
>>> regions to close
>>> 2012-10-01 19:50:01,317
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,96497834,1348964001164.0b12f37b74b2124ef9f27d1ef0ebb17a.
>>> 2012-10-01 19:50:01,318
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56507574,1349027965795.79113c51d318a11286b39397ebbfdf04.
>>> 2012-10-01 19:50:01,319
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,24297525,1349027918385.047533f3d801709a26c895a01dcc1a73.
>>> 2012-10-01 19:50:01,320
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,96439694,1348961425148.038e0e43a6e56760e4daae6f34bfc607.
>>> 2012-10-01 19:50:01,320
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,82811715,1348904784424.88fae4279f9806bef745d90f7ad37241.
>>> 2012-10-01 19:50:01,321
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56699434,1348941223663.ef3ccf0af60ee87450806b393f89cb6e.
>>> 2012-10-01 19:50:01,321
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
>>> 2012-10-01 19:50:01,322
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
>>> 2012-10-01 19:50:01,322
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
>>> 2012-10-01 19:50:01,323
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56465563,1348982699951.f34a29c0c4fc32e753d12db996ccc995.
>>> 2012-10-01 19:50:01,324
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56450734,1349027937173.c70110b3573a48299853117c4287c7be.
>>> 2012-10-01 19:50:01,325
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56361984,1349029457686.6c8d6974741e59df971da91c7355de1c.
>>> 2012-10-01 19:50:01,327
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56814705,1348962077056.69fd74167a3c5c2961e45d339b962ca9.
>>> 2012-10-01 19:50:01,327
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,00389105,1348978080963.6463149a16179d4e44c19bb49e4b4a81.
>>> 2012-10-01 19:50:01,329
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56558944,1348940893836.03bd1c0532949ec115ca8d5215dbb22f.
>>> 2012-10-01 19:50:01,330 [IPC Server handler 59 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@112ba2bf,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00392783"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.101.135:34935: output error
>>> 2012-10-01 19:50:01,330
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,5658955,1349027142822.e65d0c1f452cb41d47ad08560c653607.
>>> 2012-10-01 19:50:01,331 [IPC Server handler 59 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:01,331
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56402364,1349049689267.27b452f3bcce0815b7bf92370cbb51de.
>>> 2012-10-01 19:50:01,331 [IPC Server handler 59 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020:
>>> exiting
>>> 2012-10-01 19:50:01,332
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,96426544,1348942597601.addf704f99dd1b2e07b3eff505e2c811.
>>> 2012-10-01 19:50:01,333
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,60414161,1348962852909.c6b1b21f00bbeef8648c4b9b3d28b49a.
>>> 2012-10-01 19:50:01,333
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56552794,1348940893836.5314886f88f6576e127757faa25cef7c.
>>> 2012-10-01 19:50:01,335
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56910924,1348962040261.fdedae86206fc091a72dde52a3d0d0b4.
>>> 2012-10-01 19:50:01,335
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56720084,1349029064698.ee5cb00ab358be0d2d36c59189da32f8.
>>> 2012-10-01 19:50:01,336
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56624533,1348941287729.6121fce2c31d4754b4ad4e855d85b501.
>>> 2012-10-01 19:50:01,336
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56899934,1348970959541.f34f01dd65e293cb6ab13de17ac91eec.
>>> 2012-10-01 19:50:01,337
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
>>> 2012-10-01 19:50:01,337
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56405923,1349049689267.bb4be5396608abeff803400cdd2408f4.
>>> 2012-10-01 19:50:01,338
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56364924,1349029457686.1e1c09b6eb734d8ad48ea0b4fa103381.
>>> 2012-10-01 19:50:01,339
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56784073,1348961864297.f01eaf712e59a0bca989ced951caf4f1.
>>> 2012-10-01 19:50:01,340
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56594534,1349027142822.8e67bb85f4906d579d4d278d55efce0b.
>>> 2012-10-01 19:50:01,340
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
>>> 2012-10-01 19:50:01,340
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56491525,1349027928183.7bbfb4d39ef4332e17845001191a6ad4.
>>> 2012-10-01 19:50:01,341
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,07123624,1348959804638.c114ec80c6693a284741e220da028736.
>>> 2012-10-01 19:50:01,342
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
>>> 2012-10-01 19:50:01,342
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56546534,1348941049708.bde2614732f938db04fdd81ed6dbfcf2.
>>> 2012-10-01 19:50:01,343
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,569054,1348962040261.a7942d7837cd57b68d156d2ce7e3bd5f.
>>> 2012-10-01 19:50:01,343
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56576714,1348982931576.3dd5bf244fb116cf2b6f812fcc39ad2d.
>>> 2012-10-01 19:50:01,344
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,5689007,1348963034009.c4b16ea4d8dbc66c301e67d8e58a7e48.
>>> 2012-10-01 19:50:01,344
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56410784,1349027912141.6de7be1745c329cf9680ad15e9bde594.
>>> 2012-10-01 19:50:01,345
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
>>> 2012-10-01 19:50:01,345
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,96457954,1348964300132.674a03f0c9866968aabd70ab38a482c0.
>>> 2012-10-01 19:50:01,346
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56483084,1349027988535.de732d7e63ea53331b80255f51fc1a86.
>>> 2012-10-01 19:50:01,347
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56790484,1348941269761.5bcc58c48351de449cc17307ab4bf777.
>>> 2012-10-01 19:50:01,348
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56458293,1348982903997.4f67e6f4949a2ef7f4903f78f54c474e.
>>> 2012-10-01 19:50:01,348
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,95123235,1349026585563.a359eb4cb88d34a529804e50a5affa24.
>>> 2012-10-01 19:50:01,349
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
>>> 2012-10-01 19:50:01,350
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56368484,1348941099873.cef2729093a0d7d72b71fac1b25c0a40.
>>> 2012-10-01 19:50:01,350
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,17499894,1349026916228.630196a553f73069b9e568e6912ef0c5.
>>> 2012-10-01 19:50:01,351
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56375315,1348940876960.40cf6dfa370ce7f1fc6c1a59ba2f2191.
>>> 2012-10-01 19:50:01,351
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,95512574,1349009451986.e4d292eb66d16c21ef8ae32254334850.
>>> 2012-10-01 19:50:01,352
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
>>> 2012-10-01 19:50:01,352
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
>>> 2012-10-01 19:50:01,353
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56432705,1348941150045.07aa626f3703c7b4deaba1263c71894d.
>>> 2012-10-01 19:50:01,353
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,13118725,1349026772953.c0be859d4a4dc2246d764a8aad58fe88.
>>> 2012-10-01 19:50:01,354
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56520814,1348981132325.c2f16fd16f83aa51769abedfe8968bb6.
>>> 2012-10-01 19:50:01,354
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
>>> 2012-10-01 19:50:01,355
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56884434,1348963034009.616835869c81659a27eab896f48ae4e1.
>>> 2012-10-01 19:50:01,355
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56476541,1349028080664.341392a325646f24a3d8b8cd27ebda19.
>>> 2012-10-01 19:50:01,357
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56803462,1348941834500.6313b36f1949381d01df977a182e6140.
>>> 2012-10-01 19:50:01,357
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,96464524,1348964300132.7a15f1e8e28f713212c516777267c2bf.
>>> 2012-10-01 19:50:01,358
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56875074,1348969971950.3e408e7cb32c9213d184e10bf42837ad.
>>> 2012-10-01 19:50:01,359
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,42862354,1348981565262.7ad46818060be413140cdcc11312119d.
>>> 2012-10-01 19:50:01,359
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56582264,1349028973106.b481b61be387a041a3f259069d5013a6.
>>> 2012-10-01 19:50:01,360
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56579105,1348982931576.1561a22c16263dccb8be07c654b43f2f.
>>> 2012-10-01 19:50:01,360
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56723415,1348946404223.38d992d687ad8925810be4220a732b13.
>>> 2012-10-01 19:50:01,361
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,4285921,1348981565262.7a2cbd8452b9e406eaf1a5ebff64855a.
>>> 2012-10-01 19:50:01,362
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56336394,1348941231573.ca52393a2eabae00a64f65c0b657b95a.
>>> 2012-10-01 19:50:01,363
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,96452715,1348942353563.876edfc6e978879aac42bfc905a09c26.
>>> 2012-10-01 19:50:01,363
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
>>> 2012-10-01 19:50:01,364
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56525625,1348941298909.ccf16ed8e761765d2989343c7670e94f.
>>> 2012-10-01 19:50:01,365
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,97578484,1348938848996.98ecacc61ae4c5b3f7a3de64bec0e026.
>>> 2012-10-01 19:50:01,365
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56779025,1348961864297.cc13f0a6f5e632508f2e28a174ef1488.
>>> 2012-10-01 19:50:01,366
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
>>> 2012-10-01 19:50:01,366
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_user_events,43323443,1348591057882.8b0ab02c33f275114d89088345f58885.
>>> 2012-10-01 19:50:01,367
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
>>> 2012-10-01 19:50:01,367
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,56686234,1348942631318.69270cd5013f8ca984424e508878e428.
>>> 2012-10-01 19:50:01,368
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,98642625,1348980566403.2277d2ef1d53d40d41cd23846619a3f8.
>>> 2012-10-01 19:50:01,524 [IPC Server handler 57 on 60020] INFO
>>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>>> blk_3201413024070455305_51616611 from any node: java.io.IOException:
>>> No live nodes contain current block. Will get new block locations from
>>> namenode and retry...
>>> 2012-10-01 19:50:02,462 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 2
>>> regions to close
>>> 2012-10-01 19:50:02,462 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>>> 2012-10-01 19:50:02,462 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>>> 10.100.101.156:50010
>>> 2012-10-01 19:50:02,495 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
>>> primary datanode 10.100.102.122:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:02,496 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.122:50010 failed 3 times.  Pipeline was
>>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>>> retry...
>>> 2012-10-01 19:50:02,686 [IPC Server handler 46 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@504b62c6,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320404"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.101.172:53925: output error
>>> 2012-10-01 19:50:02,688 [IPC Server handler 46 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:02,688 [IPC Server handler 46 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020:
>>> exiting
>>> 2012-10-01 19:50:02,809 [IPC Server handler 55 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@45f1c31e,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00322424"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.101.178:35016: output error
>>> 2012-10-01 19:50:02,810 [IPC Server handler 55 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   [stack trace identical to the ClosedChannelException trace above]
>>> 
>>> 2012-10-01 19:50:02,810 [IPC Server handler 55 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020:
>>> exiting
>>> 2012-10-01 19:50:03,496 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>>> 2012-10-01 19:50:03,496 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>>> 10.100.101.156:50010
>>> 2012-10-01 19:50:03,510 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
>>> primary datanode 10.100.102.122:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   [stack trace identical to the recovery attempt #2 failure above]
>>> 2012-10-01 19:50:03,510 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.122:50010 failed 4 times.  Pipeline was
>>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>>> retry...
>>> 2012-10-01 19:50:05,299 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>>> 2012-10-01 19:50:05,299 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>>> 10.100.101.156:50010
>>> 2012-10-01 19:50:05,314 [IPC Server handler 3 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@472aa9fe,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321694"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.101.176:42371: output error
>>> 2012-10-01 19:50:05,315 [IPC Server handler 3 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   [stack trace identical to the ClosedChannelException trace above]
>>> 
>>> 2012-10-01 19:50:05,315 [IPC Server handler 3 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020:
>>> exiting
>>> 2012-10-01 19:50:05,329 [IPC Server handler 16 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@42987a12,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320293"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.101.135:35132: output error
>>> 2012-10-01 19:50:05,331 [IPC Server handler 16 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   [stack trace identical to the ClosedChannelException trace above]
>>> 
>>> 2012-10-01 19:50:05,331 [IPC Server handler 16 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020:
>>> exiting
>>> 2012-10-01 19:50:05,638 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
>>> primary datanode 10.100.102.122:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   [stack trace identical to the recovery attempt #2 failure above]
>>> 2012-10-01 19:50:05,638 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.122:50010 failed 5 times.  Pipeline was
>>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>>> retry...
>>> 2012-10-01 19:50:05,641 [IPC Server handler 26 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@a9c09e8,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319505"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.101.183:60078: output error
>>> 2012-10-01 19:50:05,643 [IPC Server handler 26 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   [stack trace identical to the ClosedChannelException trace above]
>>> 
>>> 2012-10-01 19:50:05,643 [IPC Server handler 26 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020:
>>> exiting
>>> 2012-10-01 19:50:05,664 [IPC Server handler 57 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@349d7b4,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319915"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.101.141:58290: output error
>>> 2012-10-01 19:50:05,666 [IPC Server handler 57 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   [stack trace identical to the ClosedChannelException trace above]
>>> 
>>> 2012-10-01 19:50:05,666 [IPC Server handler 57 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020:
>>> exiting
>>> 2012-10-01 19:50:07,063 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>>> 2012-10-01 19:50:07,063 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>>> 10.100.101.156:50010
>>> 2012-10-01 19:50:07,076 [IPC Server handler 23 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@5ba03734,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319654"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.161:43227: output error
>>> 2012-10-01 19:50:07,077 [IPC Server handler 23 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:07,077 [IPC Server handler 23 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020:
>>> exiting
>>> 2012-10-01 19:50:07,089 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
>>> primary datanode 10.100.102.122:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:07,090 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.122:50010 failed 6 times.  Pipeline was
>>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010.
>>> Marking primary datanode as bad.
>>> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@3d19e607,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319564"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.82:42779: output error
>>> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020:
>>> exiting
>>> 2012-10-01 19:50:07,181
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.
>>> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@5920511b,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00322014"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.88:49489: output error
>>> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020:
>>> exiting
>>> 2012-10-01 19:50:08,064 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 1
>>> regions to close
>>> 2012-10-01 19:50:08,159 [regionserver60020.leaseChecker] INFO
>>> org.apache.hadoop.hbase.regionserver.Leases:
>>> regionserver60020.leaseChecker closing leases
>>> 2012-10-01 19:50:08,159 [regionserver60020.leaseChecker] INFO
>>> org.apache.hadoop.hbase.regionserver.Leases:
>>> regionserver60020.leaseChecker closed leases
>>> 2012-10-01 19:50:08,508 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
>>> primary datanode 10.100.101.156:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:08,508 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.101.156:50010 failed 1 times.  Pipeline was
>>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>>> 2012-10-01 19:50:09,652 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
>>> primary datanode 10.100.101.156:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:09,653 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.101.156:50010 failed 2 times.  Pipeline was
>>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>>> 2012-10-01 19:50:10,697 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
>>> primary datanode 10.100.101.156:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:10,697 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.101.156:50010 failed 3 times.  Pipeline was
>>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>>> 2012-10-01 19:50:12,278 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
>>> primary datanode 10.100.101.156:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:12,279 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.101.156:50010 failed 4 times.  Pipeline was
>>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>>> 2012-10-01 19:50:13,294 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
>>> primary datanode 10.100.101.156:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:13,294 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.101.156:50010 failed 5 times.  Pipeline was
>>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>>> 2012-10-01 19:50:14,306 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
>>> primary datanode 10.100.101.156:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:14,306 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.101.156:50010 failed 6 times.  Pipeline was
>>> 10.100.101.156:50010, 10.100.102.88:50010. Marking primary datanode as
>>> bad.
>>> 2012-10-01 19:50:15,317 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
>>> primary datanode 10.100.102.88:50010
>>> org.apache.hadoop.ipc.RemoteException:
>>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>>> null.
>>>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>>   at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>>   at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy4.nextGenerationStamp(Unknown Source)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>>   at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>>   at java.security.AccessController.doPrivileged(Native Method)
>>>   at javax.security.auth.Subject.doAs(Subject.java:396)
>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>> 
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy14.recoverBlock(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:15,318 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 1 times.  Pipeline was
>>> 10.100.102.88:50010. Will retry...
>>> [the same "Failed recovery attempt" WARN and identical stack trace repeated for attempts #1 through #5, each followed by "Error Recovery for block blk_5535637699691880681_51616301 failed ... Will retry..."]
>>> 2012-10-01 19:50:20,415 [DataStreamer for file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> block blk_5535637699691880681_51616301] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>> 2012-10-01 19:50:20,415 [IPC Server handler 58 on 60020] ERROR
>>> org.apache.hadoop.hdfs.DFSClient: Exception closing file
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>>> : java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>> java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:20,415 [IPC Server handler 69 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>>> java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:20,415 [regionserver60020.logSyncer] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>>> java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:20,417 [IPC Server handler 69 on 60020] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>>> java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:20,417 [IPC Server handler 58 on 60020] INFO
>>> org.apache.hadoop.fs.FileSystem: Could not cancel cleanup thread,
>>> though no FileSystems are open
>>> 2012-10-01 19:50:20,417 [regionserver60020.logSyncer] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>>> java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:20,417 [regionserver60020.logSyncer] FATAL
>>> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
>>> Requesting close of hlog
>>> java.io.IOException: Reflection
>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>>>   at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.lang.reflect.InvocationTargetException
>>>   at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>>   ... 4 more
>>> Caused by: java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:20,418 [regionserver60020.logSyncer] ERROR
>>> org.apache.hadoop.hbase.regionserver.wal.HLog: Error while syncing,
>>> requesting close of hlog
>>> java.io.IOException: Reflection
>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>>>   at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.lang.reflect.InvocationTargetException
>>>   at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>>   ... 4 more
>>> Caused by: java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:20,417 [IPC Server handler 69 on 60020] FATAL
>>> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
>>> Requesting close of hlog
>>> java.io.IOException: Reflection
>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.append(HLog.java:1033)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:1852)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1723)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3076)
>>>   at sun.reflect.GeneratedMethodAccessor96.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.lang.reflect.InvocationTargetException
>>>   at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>>   ... 11 more
>>> Caused by: java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:20,417 [IPC Server handler 29 on 60020] FATAL
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
>>> System not available
>>> java.io.IOException: File system is not available
>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: java.lang.InterruptedException
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>   ... 9 more
>>> Caused by: java.lang.InterruptedException
>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>   ... 21 more
>>> 2012-10-01 19:50:20,417 [IPC Server handler 24 on 60020] FATAL
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
>>> System not available
>>> java.io.IOException: File system is not available
>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: java.lang.InterruptedException
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>   at $Proxy7.getFileInfo(Unknown Source)
>>>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>>   at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>>   at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>>   ... 9 more
>>> Caused by: java.lang.InterruptedException
>>>   at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>>   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>>   at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>   at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>>   at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>>   ... 21 more
>>> 2012-10-01 19:50:20,420 [IPC Server handler 24 on 60020] FATAL
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>>> abort: loaded coprocessors are: []
>>> [... snip: the identical "ABORTING region server
>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File System not
>>> available" FATAL entry, stack trace, and "loaded coprocessors are: []"
>>> line were also logged at 19:50:20 by IPC Server handlers 1, 25, 90,
>>> 58 and 17 ...]
>>> 2012-10-01 19:50:20,423 [IPC Server handler 58 on 60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>>> readRequestsCount=6744201, writeRequestsCount=904280,
>>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
>>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>>> blockCacheCount=5435, blockCacheHitCount=321294212,
>>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>>> hdfsBlocksLocalityIndex=97
>>> [... snip: the same "Dump of metrics" INFO entry was also logged at
>>> 19:50:20 by IPC Server handlers 17, 90, 25 and 1 ...]
>>> 2012-10-01 19:50:20,420 [IPC Server handler 69 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: (responseTooSlow):
>>> {"processingtimems":22039,"call":"multi(org.apache.hadoop.hbase.client.MultiAction@207c46fb),
>>> rpc version=1, client version=29,
>>> methodsFingerPrint=54742778","client":"10.100.102.155:39852","starttimems":1349120998380,"queuetimems":0,"class":"HRegionServer","responsesize":0,"method":"multi"}
>>> [... snip: IPC Server handler 24 logged the same "Dump of metrics"
>>> entry, differing only in usedHeapMB=1575 ...]
>>> 2012-10-01 19:50:20,420
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING
>>> region server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272:
>>> Unrecoverable exception while closing region
>>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
>>> still finishing close
>>> java.io.IOException: Filesystem closed
>>>   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
>>>   at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1859)
>>>   at java.io.FilterInputStream.close(FilterInputStream.java:155)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:320)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1081)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:568)
>>>   at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:473)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:789)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:724)
>>>   at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:119)
>>>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>   at java.lang.Thread.run(Thread.java:662)
>>> 2012-10-01 19:50:20,426
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>>> abort: loaded coprocessors are: []
>>> 2012-10-01 19:50:20,419 [IPC Server handler 29 on 60020] FATAL
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>>> abort: loaded coprocessors are: []
>>> 2012-10-01 19:50:20,426
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of
>>> metrics: requestsPerSecond=0, numberOfOnlineRegions=136,
>>> numberOfStores=136, numberOfStorefiles=189, storefileIndexSizeMB=15,
>>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>>> readRequestsCount=6744201, writeRequestsCount=904280,
>>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1577,
>>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>>> blockCacheCount=5435, blockCacheHitCount=321294212,
>>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>>> hdfsBlocksLocalityIndex=97
>>> [... snip: IPC Server handler 29 logged the same "Dump of metrics"
>>> entry (usedHeapMB=1577) ...]
>>> 2012-10-01 19:50:20,445 [IPC Server handler 58 on 60020] WARN
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>> fatal error to master
>>> java.lang.reflect.UndeclaredThrowableException
>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Call to
>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>> local exception: java.nio.channels.ClosedChannelException
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>   ... 11 more
>>> Caused by: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>>> 2012-10-01 19:50:20,446 [IPC Server handler 29 on 60020] WARN
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>> fatal error to master
>>> java.lang.reflect.UndeclaredThrowableException
>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Call to
>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>> local exception: java.nio.channels.ClosedByInterruptException
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>   ... 11 more
>>> Caused by: java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>>>   at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
>>>   at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
>>>   at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>>>   at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>>>   at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:545)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)
>>>   ... 12 more
>>> 2012-10-01 19:50:20,447 [IPC Server handler 29 on 60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>> System not available
>>> 2012-10-01 19:50:20,446 [IPC Server handler 58 on 60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>> System not available
>>> 2012-10-01 19:50:20,446 [IPC Server handler 17 on 60020] WARN
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>> fatal error to master
>>> java.lang.reflect.UndeclaredThrowableException
>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:328)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:362)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1045)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:897)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>   ... 11 more
>>> 2012-10-01 19:50:20,448 [IPC Server handler 17 on 60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>> System not available
>>> 2012-10-01 19:50:20,445 [IPC Server handler 1 on 60020] WARN
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>> fatal error to master
>>> java.lang.reflect.UndeclaredThrowableException
>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Call to
>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>> local exception: java.nio.channels.ClosedChannelException
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>   ... 11 more
>>> Caused by: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>>> 2012-10-01 19:50:20,448 [IPC Server handler 1 on 60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>> System not available
>>> 2012-10-01 19:50:20,445
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to
>>> report fatal error to master
>>> java.lang.reflect.UndeclaredThrowableException
>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>   at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:131)
>>>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>   at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.io.IOException: Call to
>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>> local exception: java.nio.channels.ClosedByInterruptException
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>   ... 7 more
>>> Caused by: java.nio.channels.ClosedByInterruptException
>>>   at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>>>   at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
>>>   at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
>>>   at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>>>   at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>>>   at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:545)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> 2012-10-01 19:50:20,450
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED:
>>> Unrecoverable exception while closing region
>>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
>>> still finishing close
>>> 2012-10-01 19:50:20,445 [IPC Server handler 69 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> multi(org.apache.hadoop.hbase.client.MultiAction@207c46fb), rpc
>>> version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.155:39852: output error
>>> 2012-10-01 19:50:20,445 [IPC Server handler 24 on 60020] WARN
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>> fatal error to master
>>> java.lang.reflect.UndeclaredThrowableException
>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Call to
>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>> local exception: java.nio.channels.ClosedChannelException
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>   ... 11 more
>>> Caused by: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>>> 2012-10-01 19:50:20,451 [IPC Server handler 24 on 60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>> System not available
>>> 2012-10-01 19:50:20,445 [IPC Server handler 90 on 60020] WARN
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>> fatal error to master
>>> java.lang.reflect.UndeclaredThrowableException
>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Call to
>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>> local exception: java.nio.channels.ClosedChannelException
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>   ... 11 more
>>> Caused by: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>>> 2012-10-01 19:50:20,451 [IPC Server handler 90 on 60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>> System not available
>>> 2012-10-01 19:50:20,445 [IPC Server handler 25 on 60020] WARN
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>>> fatal error to master
>>> java.lang.reflect.UndeclaredThrowableException
>>>   at $Proxy8.reportRSFatalError(Unknown Source)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>>   at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>> Caused by: java.io.IOException: Call to
>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>>> local exception: java.nio.channels.ClosedChannelException
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>>   ... 11 more
>>> Caused by: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>>   at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>>   at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>>   at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>>   at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>>   at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>>> 2012-10-01 19:50:20,452 [IPC Server handler 25 on 60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>>> System not available
>>> 2012-10-01 19:50:20,452 [IPC Server handler 29 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@5d72e577,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321312"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.101.184:34111: output error
>>> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@2237178f,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00316983"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.101.188:59581: output error
>>> 2012-10-01 19:50:20,450 [IPC Server handler 69 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020:
>>> exiting
>>> 2012-10-01 19:50:20,450
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>>> ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable
>>> while processing event M_RS_CLOSE_REGION
>>> java.lang.RuntimeException: java.io.IOException: Filesystem closed
>>>   at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:133)
>>>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>   at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.io.IOException: Filesystem closed
>>>   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
>>>   at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1859)
>>>   at java.io.FilterInputStream.close(FilterInputStream.java:155)
>>>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:320)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1081)
>>>   at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:568)
>>>   at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:473)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:789)
>>>   at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:724)
>>>   at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:119)
>>>   ... 4 more
>>> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@573dba6d,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"0032027"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.101.183:60076: output error
>>> 2012-10-01 19:50:20,452 [IPC Server handler 69 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020:
>>> exiting
>>> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@4eebbed5,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00317054"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.146:40240: output error
>>> 2012-10-01 19:50:20,452 [IPC Server handler 29 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:20,453 [IPC Server handler 29 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020:
>>> exiting
>>> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020:
>>> exiting
>>> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020:
>>> exiting
>>> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@4ff0ed4a,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00318964"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.101.172:53924: output error
>>> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020:
>>> exiting
>>> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@526abe46,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00316914"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.101.184:34110: output error
>>> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020:
>>> exiting
>>> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>>> get([B@5df20fef,
>>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319173"}),
>>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>>> 10.100.102.146:40243: output error
>>> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] WARN
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020
>>> caught: java.nio.channels.ClosedChannelException
>>>   at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>> 
>>> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020:
>>> exiting
>>> 2012-10-01 19:50:21,066
>>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
>>> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>>> java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] WARN
>>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>>> java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] FATAL
>>> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
>>> Requesting close of hlog
>>> java.io.IOException: Reflection
>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>>>   at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.lang.reflect.InvocationTargetException
>>>   at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>>   ... 4 more
>>> Caused by: java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:21,419 [regionserver60020.logSyncer] ERROR
>>> org.apache.hadoop.hbase.regionserver.wal.HLog: Error while syncing,
>>> requesting close of hlog
>>> java.io.IOException: Reflection
>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>>   at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>>>   at java.lang.Thread.run(Thread.java:662)
>>> Caused by: java.lang.reflect.InvocationTargetException
>>>   at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>   at java.lang.reflect.Method.invoke(Method.java:597)
>>>   at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>>   ... 4 more
>>> Caused by: java.io.IOException: Error Recovery for block
>>> blk_5535637699691880681_51616301 failed  because recovery from primary
>>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>>> 10.100.102.88:50010. Aborting...
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>>> 2012-10-01 19:50:22,066 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272; all regions
>>> closed.
>>> 2012-10-01 19:50:22,066 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closing
>>> leases
>>> 2012-10-01 19:50:22,066 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closed
>>> leases
>>> 2012-10-01 19:50:22,082 [regionserver60020] WARN
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed deleting my
>>> ephemeral node
>>> org.apache.zookeeper.KeeperException$SessionExpiredException:
>>> KeeperErrorCode = Session expired for
>>> /hbase/rs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
>>>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>>>   at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>>   at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:868)
>>>   at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:107)
>>>   at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:962)
>>>   at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:951)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:964)
>>>   at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:762)
>>>   at java.lang.Thread.run(Thread.java:662)
>>> 2012-10-01 19:50:22,082 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
>>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272; zookeeper
>>> connection closed.
>>> 2012-10-01 19:50:22,082 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: regionserver60020
>>> exiting
>>> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
>>> starting; hbase.shutdown.hook=true;
>>> fsShutdownHook=Thread[Thread-5,5,main]
>>> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown
>>> hook
>>> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs
>>> shutdown hook thread.
>>> 2012-10-01 19:50:22,124 [Shutdownhook:regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
>>> finished.
>>> Mon Oct  1 19:54:10 UTC 2012 Starting regionserver on
>>> data3024.ngpipes.milp.ngmoco.com
>>> core file size          (blocks, -c) 0
>>> data seg size           (kbytes, -d) unlimited
>>> scheduling priority             (-e) 20
>>> file size               (blocks, -f) unlimited
>>> pending signals                 (-i) 16382
>>> max locked memory       (kbytes, -l) 64
>>> max memory size         (kbytes, -m) unlimited
>>> open files                      (-n) 32768
>>> pipe size            (512 bytes, -p) 8
>>> POSIX message queues     (bytes, -q) 819200
>>> real-time priority              (-r) 0
>>> stack size              (kbytes, -s) 8192
>>> cpu time               (seconds, -t) unlimited
>>> max user processes              (-u) unlimited
>>> virtual memory          (kbytes, -v) unlimited
>>> file locks                      (-x) unlimited
>>> 2012-10-01 19:54:11,355 [main] INFO
>>> org.apache.hadoop.hbase.util.VersionInfo: HBase 0.92.1
>>> 2012-10-01 19:54:11,356 [main] INFO
>>> org.apache.hadoop.hbase.util.VersionInfo: Subversion
>>> https://svn.apache.org/repos/asf/hbase/branches/0.92 -r 1298924
>>> 2012-10-01 19:54:11,356 [main] INFO
>>> org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Fri
>>> Mar  9 16:58:34 UTC 2012
>>> 2012-10-01 19:54:11,513 [main] INFO
>>> org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java
>>> HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc.,
>>> vmVersion=20.1-b02
>>> 2012-10-01 19:54:11,513 [main] INFO
>>> org.apache.hadoop.hbase.util.ServerCommandLine:
>>> vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx4000m,
>>> -XX:NewSize=128m, -XX:MaxNewSize=128m,
>>> -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC,
>>> -XX:CMSInitiatingOccupancyFraction=75, -verbose:gc,
>>> -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps,
>>> -Xloggc:/data2/hbase_log/gc-hbase.log,
>>> -Dcom.sun.management.jmxremote.authenticate=true,
>>> -Dcom.sun.management.jmxremote.ssl=false,
>>> -Dcom.sun.management.jmxremote.password.file=/home/hadoop/hadoop/conf/jmxremote.password,
>>> -Dcom.sun.management.jmxremote.port=8010,
>>> -Dhbase.log.dir=/data2/hbase_log,
>>> -Dhbase.log.file=hbase-hadoop-regionserver-data3024.ngpipes.milp.ngmoco.com.log,
>>> -Dhbase.home.dir=/home/hadoop/hbase, -Dhbase.id.str=hadoop,
>>> -Dhbase.root.logger=INFO,DRFA,
>>> -Djava.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64]
>>> 2012-10-01 19:54:11,964 [IPC Reader 0 on port 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-10-01 19:54:11,967 [IPC Reader 1 on port 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-10-01 19:54:11,970 [IPC Reader 2 on port 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-10-01 19:54:11,973 [IPC Reader 3 on port 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-10-01 19:54:11,976 [IPC Reader 4 on port 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-10-01 19:54:11,979 [IPC Reader 5 on port 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-10-01 19:54:11,982 [IPC Reader 6 on port 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-10-01 19:54:11,985 [IPC Reader 7 on port 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-10-01 19:54:11,988 [IPC Reader 8 on port 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-10-01 19:54:11,991 [IPC Reader 9 on port 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>>> 2012-10-01 19:54:12,002 [main] INFO
>>> org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics
>>> with hostName=HRegionServer, port=60020
>>> 2012-10-01 19:54:12,081 [main] INFO
>>> org.apache.hadoop.hbase.io.hfile.CacheConfig: Allocating LruBlockCache
>>> with maximum size 996.8m
>>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client
>>> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48
>>> GMT
>>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client
>>> environment:host.name=data3024.ngpipes.milp.ngmoco.com
>>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client
>>> environment:java.version=1.6.0_26
>>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun
>>> Microsystems Inc.
>>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client
>>> environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
>>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client
>>> environment:java.class.path=/home/hadoop/hbase/conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase:/home/hadoop/hbase/hbase-0.92.1.jar:/home/hadoop/hbase/hbase-0.92.1-tests.jar:/home/hadoop/hbase/lib/activation-1.1.jar:/home/hadoop/hbase/lib/asm-3.1.jar:/home/hadoop/hbase/lib/avro-1.5.3.jar:/home/hadoop/hbase/lib/avro-ipc-1.5.3.jar:/home/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hbase/lib/commons-cli-1.2.jar:/home/hadoop/hbase/lib/commons-codec-1.4.jar:/home/hadoop/hbase/lib/commons-collections-3.2.1.jar:/home/hadoop/hbase/lib/commons-configuration-1.6.jar:/home/hadoop/hbase/lib/commons-digester-1.8.jar:/home/hadoop/hbase/lib/commons-el-1.0.jar:/home/hadoop/hbase/lib/commons-httpclient-3.1.jar:/home/hadoop/hbase/lib/commons-lang-2.5.jar:/home/hadoop/hbase/lib/commons-logging-1.1.1.jar:/home/hadoop/hbase/lib/commons-math-2.1.jar:/home/hadoop/hbase/lib/commons-net-1.4.1.jar:/home/hadoop/hbase/lib/core-3.1.1.jar:/home/hadoop/hbase/lib/guava-r09.jar:/home/hadoop/hbase/lib/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hbase/lib/hadoop-lzo-0.4.9.jar:/home/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/home/hadoop/hbase/lib/httpclient-4.0.1.jar:/home/hadoop/hbase/lib/httpcore-4.0.1.jar:/home/hadoop/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/home/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase/lib/jaxb-api-2.1.jar:/home/hadoop/hbase/lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase/lib/jersey-core-1.4.jar:/home/hadoop/hbase/lib/jersey-json-1.4.jar:/home/hadoop/hbase/lib/jersey-server-1.4.jar:/home/hadoop/hbase/lib/jettison-1.1.jar:/home/hadoop/hbase/lib/jetty-6.1.26.jar:/home/hadoop/hbase/lib/jetty-util-6.1.26.jar:/home/hadoop/hbase/lib/jruby-complete-1.6.5.
>>> jar:/home/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase/lib/libthrift-0.7.0.jar:/home/hadoop/hbase/lib/log4j-1.2.16.jar:/home/hadoop/hbase/lib/netty-3.2.4.Final.jar:/home/hadoop/hbase/lib/protobuf-java-2.4.0a.jar:/home/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase/lib/servlet-api-2.5.jar:/home/hadoop/hbase/lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/home/hadoop/hbase/lib/stax-api-1.0.1.jar:/home/hadoop/hbase/lib/velocity-1.7.jar:/home/hadoop/hbase/lib/xmlenc-0.52.jar:/home/hadoop/hbase/lib/zookeeper-3.4.3.jar:
>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client
>>> environment:java.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64
>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client
>>> environment:os.version=2.6.35-30-generic
>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client environment:user.name=hadoop
>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client
>>> environment:user.home=/home/hadoop/
>>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Client
>>> environment:user.dir=/home/gregross
>>> 2012-10-01 19:54:12,225 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Initiating client connection,
>>> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
>>> sessionTimeout=180000 watcher=regionserver:60020
>>> 2012-10-01 19:54:12,251 [regionserver60020-SendThread()] INFO
>>> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
>>> /10.100.102.197:2181
>>> 2012-10-01 19:54:12,252 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
>>> of this process is 15403@data3024.ngpipes.milp.ngmoco.com
>>> 2012-10-01 19:54:12,259
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>>> SASL-authenticate because the default JAAS configuration section
>>> 'Client' could not be found. If you are not using SASL, you may ignore
>>> this. On the other hand, if you expected SASL to work, please fix your
>>> JAAS configuration.
>>> 2012-10-01 19:54:12,260
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>>> session
>>> 2012-10-01 19:54:12,272
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>>> server; r-o mode will be unavailable
>>> 2012-10-01 19:54:12,273
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
>>> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
>>> sessionid = 0x137ec64373dd4b5, negotiated timeout = 40000
>>> 2012-10-01 19:54:12,289 [main] INFO
>>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown
>>> hook thread: Shutdownhook:regionserver60020
>>> 2012-10-01 19:54:12,352 [regionserver60020] INFO
>>> org.apache.zookeeper.ZooKeeper: Initiating client connection,
>>> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
>>> sessionTimeout=180000 watcher=hconnection
>>> 2012-10-01 19:54:12,353 [regionserver60020-SendThread()] INFO
>>> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
>>> /10.100.102.197:2181
>>> 2012-10-01 19:54:12,353 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
>>> of this process is 15403@data3024.ngpipes.milp.ngmoco.com
>>> 2012-10-01 19:54:12,354
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>>> SASL-authenticate because the default JAAS configuration section
>>> 'Client' could not be found. If you are not using SASL, you may ignore
>>> this. On the other hand, if you expected SASL to work, please fix your
>>> JAAS configuration.
>>> 2012-10-01 19:54:12,354
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>>> session
>>> 2012-10-01 19:54:12,361
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>>> server; r-o mode will be unavailable
>>> 2012-10-01 19:54:12,361
>>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>>> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
>>> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
>>> sessionid = 0x137ec64373dd4b6, negotiated timeout = 40000
>>> 2012-10-01 19:54:12,384 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
>>> globalMemStoreLimit=1.6g, globalMemStoreLimitLowMark=1.4g,
>>> maxHeap=3.9g
>>> 2012-10-01 19:54:12,400 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Runs every 2hrs,
>>> 46mins, 40sec
>>> 2012-10-01 19:54:12,420 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect
>>> to Master server at
>>> namenode-sn301.ngpipes.milp.ngmoco.com,60000,1348698078915
>>> 2012-10-01 19:54:12,453 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to
>>> master at data3024.ngpipes.milp.ngmoco.com/10.100.101.156:60020
>>> 2012-10-01 19:54:12,453 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at
>>> namenode-sn301.ngpipes.milp.ngmoco.com,60000,1348698078915 that we are
>>> up with port=60020, startcode=1349121252040
>>> 2012-10-01 19:54:12,476 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us
>>> hostname to use. Was=data3024.ngpipes.milp.ngmoco.com,
>>> Now=data3024.ngpipes.milp.ngmoco.com
>>> 2012-10-01 19:54:12,568 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.wal.HLog: HLog configuration:
>>> blocksize=64 MB, rollsize=60.8 MB, enabled=true,
>>> optionallogflushinternal=1000ms
>>> 2012-10-01 19:54:12,642 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.wal.HLog:  for
>>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1349121252040/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1349121252040.1349121252569
>>> 2012-10-01 19:54:12,643 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.wal.HLog: Using
>>> getNumCurrentReplicas--HDFS-826
>>> 2012-10-01 19:54:12,651 [regionserver60020] INFO
>>> org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics
>>> with processName=RegionServer, sessionId=regionserver60020
>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.metrics: MetricsString added: revision
>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.metrics: MetricsString added: date
>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.metrics: MetricsString added: user
>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.metrics: MetricsString added: url
>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.metrics: MetricsString added: version
>>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.metrics: new MBeanInfo
>>> 2012-10-01 19:54:12,657 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.metrics: new MBeanInfo
>>> 2012-10-01 19:54:12,657 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics:
>>> Initialized
>>> 2012-10-01 19:54:12,722 [regionserver60020] INFO org.mortbay.log:
>>> Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>>> org.mortbay.log.Slf4jLog
>>> 2012-10-01 19:54:12,774 [regionserver60020] INFO
>>> org.apache.hadoop.http.HttpServer: Added global filtersafety
>>> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>>> 2012-10-01 19:54:12,787 [regionserver60020] INFO
>>> org.apache.hadoop.http.HttpServer: Port returned by
>>> webServer.getConnectors()[0].getLocalPort() before open() is -1.
>>> Opening the listener on 60030
>>> 2012-10-01 19:54:12,787 [regionserver60020] INFO
>>> org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned
>>> 60030 webServer.getConnectors()[0].getLocalPort() returned 60030
>>> 2012-10-01 19:54:12,787 [regionserver60020] INFO
>>> org.apache.hadoop.http.HttpServer: Jetty bound to port 60030
>>> 2012-10-01 19:54:12,787 [regionserver60020] INFO org.mortbay.log: jetty-6.1.26
>>> 2012-10-01 19:54:13,079 [regionserver60020] INFO org.mortbay.log:
>>> Started SelectChannelConnector@0.0.0.0:60030
>>> 2012-10-01 19:54:13,079 [IPC Server Responder] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
>>> 2012-10-01 19:54:13,079 [IPC Server listener on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60020:
>>> starting
>>> 2012-10-01 19:54:13,094 [IPC Server handler 0 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,094 [IPC Server handler 1 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,095 [IPC Server handler 2 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,095 [IPC Server handler 3 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,095 [IPC Server handler 4 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,095 [IPC Server handler 5 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,095 [IPC Server handler 6 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,095 [IPC Server handler 7 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,095 [IPC Server handler 8 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,096 [IPC Server handler 9 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,096 [IPC Server handler 10 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 10 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,096 [IPC Server handler 11 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 11 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,096 [IPC Server handler 12 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 12 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,096 [IPC Server handler 13 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 13 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,096 [IPC Server handler 14 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 14 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,097 [IPC Server handler 15 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 15 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,097 [IPC Server handler 16 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,097 [IPC Server handler 17 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,097 [IPC Server handler 18 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 18 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,098 [IPC Server handler 19 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 19 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,098 [IPC Server handler 20 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 20 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,098 [IPC Server handler 21 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 21 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,098 [IPC Server handler 22 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 22 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,098 [IPC Server handler 23 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,098 [IPC Server handler 24 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,098 [IPC Server handler 25 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,099 [IPC Server handler 26 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,099 [IPC Server handler 27 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 27 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,099 [IPC Server handler 28 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 28 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,100 [IPC Server handler 29 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,101 [IPC Server handler 30 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 30 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,101 [IPC Server handler 31 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 31 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,101 [IPC Server handler 32 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 32 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,101 [IPC Server handler 33 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 33 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,101 [IPC Server handler 34 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 34 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,102 [IPC Server handler 35 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 35 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,102 [IPC Server handler 36 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 36 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,102 [IPC Server handler 37 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 37 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,102 [IPC Server handler 38 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 38 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,102 [IPC Server handler 39 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,102 [IPC Server handler 40 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 40 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,103 [IPC Server handler 41 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,103 [IPC Server handler 42 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 42 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,103 [IPC Server handler 43 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 43 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,103 [IPC Server handler 44 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,103 [IPC Server handler 45 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 45 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,103 [IPC Server handler 46 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,103 [IPC Server handler 47 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 47 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,103 [IPC Server handler 48 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 48 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,104 [IPC Server handler 49 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 49 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,104 [IPC Server handler 50 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 50 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,104 [IPC Server handler 51 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 51 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,104 [IPC Server handler 52 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 52 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,104 [IPC Server handler 53 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 53 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,104 [IPC Server handler 54 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 54 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,104 [IPC Server handler 55 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020:
>>> starting
>>> 2012-10-01 19:54:13,104 [IPC Server handler 56 on 60020] INFO
>>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 56 on 60020:
>>> starting
>>> [... 44 near-identical startup entries elided: IPC Server handlers
>>> 57-99 and PRI IPC Server handlers 0-9 on 60020 each log "starting"
>>> between 19:54:13,104 and 19:54:13,111 ...]
>>> 2012-10-01 19:54:13,124 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Serving as
>>> data3024.ngpipes.milp.ngmoco.com,60020,1349121252040, RPC listening on
>>> data3024.ngpipes.milp.ngmoco.com/10.100.101.156:60020,
>>> sessionid=0x137ec64373dd4b5
>>> 2012-10-01 19:54:13,124
>>> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1349121252040]
>>> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
>>> SplitLogWorker data3024.ngpipes.milp.ngmoco.com,60020,1349121252040
>>> starting
>>> 2012-10-01 19:54:13,125 [regionserver60020] INFO
>>> org.apache.hadoop.hbase.regionserver.HRegionServer: Registered
>>> RegionServer MXBean
>>> 
>>> GC log
>>> ======
>>> 
>>> 1.914: [GC 1.914: [ParNew: 99976K->7646K(118016K), 0.0087130 secs]
>>> 99976K->7646K(123328K), 0.0088110 secs] [Times: user=0.07 sys=0.00,
>>> real=0.00 secs]
>>> 416.341: [GC 416.341: [ParNew: 112558K->12169K(118016K), 0.0447760
>>> secs] 112558K->25025K(133576K), 0.0450080 secs] [Times: user=0.13
>>> sys=0.02, real=0.05 secs]
>>> 416.386: [GC [1 CMS-initial-mark: 12855K(15560K)] 25089K(133576K),
>>> 0.0037570 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 416.390: [CMS-concurrent-mark-start]
>>> 416.407: [CMS-concurrent-mark: 0.015/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 416.407: [CMS-concurrent-preclean-start]
>>> 416.408: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 416.408: [GC[YG occupancy: 12233 K (118016 K)]416.408: [Rescan
>>> (parallel) , 0.0074970 secs]416.416: [weak refs processing, 0.0000370
>>> secs] [1 CMS-remark: 12855K(15560K)] 25089K(133576K), 0.0076480 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 416.416: [CMS-concurrent-sweep-start]
>>> 416.419: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 416.419: [CMS-concurrent-reset-start]
>>> 416.467: [CMS-concurrent-reset: 0.049/0.049 secs] [Times: user=0.01
>>> sys=0.04, real=0.05 secs]
>>> [... 22 near-identical CMS cycles elided (initial-marks at 418.468
>>> through 556.543): in each cycle the old generation holds a steady
>>> ~12.8MB of its ~21.4MB capacity, the abortable preclean phase aborts
>>> after ~5 seconds ("CMS: abort preclean due to time"), and young-gen
>>> occupancy climbs slowly from ~13MB to ~28MB ...]
>>> 563.579: [GC [1 CMS-initial-mark: 12849K(21428K)] 40941K(139444K),
>>> 0.0049390 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 563.584: [CMS-concurrent-mark-start]
>>> 563.598: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 563.598: [CMS-concurrent-preclean-start]
>>> 563.598: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 563.598: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 568.693:
>>> [CMS-concurrent-abortable-preclean: 0.717/5.095 secs] [Times:
>>> user=0.71 sys=0.00, real=5.09 secs]
>>> 568.694: [GC[YG occupancy: 28411 K (118016 K)]568.694: [Rescan
>>> (parallel) , 0.0035750 secs]568.697: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 41261K(139444K), 0.0036740 secs]
>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>> 568.698: [CMS-concurrent-sweep-start]
>>> 568.700: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 568.700: [CMS-concurrent-reset-start]
>>> 568.709: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 570.709: [GC [1 CMS-initial-mark: 12849K(21428K)] 41389K(139444K),
>>> 0.0048710 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 570.714: [CMS-concurrent-mark-start]
>>> 570.729: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 570.729: [CMS-concurrent-preclean-start]
>>> 570.729: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 570.729: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 575.738:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 575.738: [GC[YG occupancy: 28900 K (118016 K)]575.738: [Rescan
>>> (parallel) , 0.0036390 secs]575.742: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 41750K(139444K), 0.0037440 secs]
>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>> 575.742: [CMS-concurrent-sweep-start]
>>> 575.744: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 575.744: [CMS-concurrent-reset-start]
>>> 575.752: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 577.752: [GC [1 CMS-initial-mark: 12849K(21428K)] 41878K(139444K),
>>> 0.0050100 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 577.758: [CMS-concurrent-mark-start]
>>> 577.772: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 577.772: [CMS-concurrent-preclean-start]
>>> 577.773: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 577.773: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 582.779:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 582.779: [GC[YG occupancy: 29348 K (118016 K)]582.779: [Rescan
>>> (parallel) , 0.0026100 secs]582.782: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 42198K(139444K), 0.0027110 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 582.782: [CMS-concurrent-sweep-start]
>>> 582.784: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 582.784: [CMS-concurrent-reset-start]
>>> 582.792: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 584.792: [GC [1 CMS-initial-mark: 12849K(21428K)] 42326K(139444K),
>>> 0.0050510 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 584.798: [CMS-concurrent-mark-start]
>>> 584.812: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 584.812: [CMS-concurrent-preclean-start]
>>> 584.813: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 584.813: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 589.819:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 589.819: [GC[YG occupancy: 29797 K (118016 K)]589.819: [Rescan
>>> (parallel) , 0.0039510 secs]589.823: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 42647K(139444K), 0.0040460 secs]
>>> [Times: user=0.03 sys=0.00, real=0.01 secs]
>>> 589.824: [CMS-concurrent-sweep-start]
>>> 589.826: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 589.826: [CMS-concurrent-reset-start]
>>> 589.835: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 591.835: [GC [1 CMS-initial-mark: 12849K(21428K)] 42775K(139444K),
>>> 0.0050090 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 591.840: [CMS-concurrent-mark-start]
>>> 591.855: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 591.855: [CMS-concurrent-preclean-start]
>>> 591.855: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 591.855: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 596.857:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 596.857: [GC[YG occupancy: 31414 K (118016 K)]596.857: [Rescan
>>> (parallel) , 0.0028500 secs]596.860: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 44264K(139444K), 0.0029480 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 596.861: [CMS-concurrent-sweep-start]
>>> 596.862: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 596.862: [CMS-concurrent-reset-start]
>>> 596.870: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 598.871: [GC [1 CMS-initial-mark: 12849K(21428K)] 44392K(139444K),
>>> 0.0050640 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 598.876: [CMS-concurrent-mark-start]
>>> 598.890: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>> sys=0.00, real=0.01 secs]
>>> 598.890: [CMS-concurrent-preclean-start]
>>> 598.891: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 598.891: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 603.897:
>>> [CMS-concurrent-abortable-preclean: 0.705/5.007 secs] [Times:
>>> user=0.72 sys=0.00, real=5.01 secs]
>>> 603.898: [GC[YG occupancy: 32032 K (118016 K)]603.898: [Rescan
>>> (parallel) , 0.0039660 secs]603.902: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 44882K(139444K), 0.0040680 secs]
>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>> 603.902: [CMS-concurrent-sweep-start]
>>> 603.903: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 603.903: [CMS-concurrent-reset-start]
>>> 603.912: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 605.912: [GC [1 CMS-initial-mark: 12849K(21428K)] 45010K(139444K),
>>> 0.0053650 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 605.918: [CMS-concurrent-mark-start]
>>> 605.932: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>> sys=0.00, real=0.01 secs]
>>> 605.932: [CMS-concurrent-preclean-start]
>>> 605.932: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 605.932: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 610.939:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 610.940: [GC[YG occupancy: 32481 K (118016 K)]610.940: [Rescan
>>> (parallel) , 0.0032540 secs]610.943: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 45330K(139444K), 0.0033560 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 610.943: [CMS-concurrent-sweep-start]
>>> 610.944: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 610.945: [CMS-concurrent-reset-start]
>>> 610.953: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 612.486: [GC [1 CMS-initial-mark: 12849K(21428K)] 45459K(139444K),
>>> 0.0055070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 612.492: [CMS-concurrent-mark-start]
>>> 612.505: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 612.505: [CMS-concurrent-preclean-start]
>>> 612.506: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 612.506: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 617.511:
>>> [CMS-concurrent-abortable-preclean: 0.700/5.006 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 617.512: [GC[YG occupancy: 32929 K (118016 K)]617.512: [Rescan
>>> (parallel) , 0.0037500 secs]617.516: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 45779K(139444K), 0.0038560 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 617.516: [CMS-concurrent-sweep-start]
>>> 617.518: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 617.518: [CMS-concurrent-reset-start]
>>> 617.527: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 619.528: [GC [1 CMS-initial-mark: 12849K(21428K)] 45907K(139444K),
>>> 0.0053320 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 619.533: [CMS-concurrent-mark-start]
>>> 619.546: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
>>> sys=0.00, real=0.02 secs]
>>> 619.546: [CMS-concurrent-preclean-start]
>>> 619.547: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 619.547: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 624.552:
>>> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 624.552: [GC[YG occupancy: 33377 K (118016 K)]624.552: [Rescan
>>> (parallel) , 0.0037290 secs]624.556: [weak refs processing, 0.0000130
>>> secs] [1 CMS-remark: 12849K(21428K)] 46227K(139444K), 0.0038330 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 624.556: [CMS-concurrent-sweep-start]
>>> 624.558: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 624.558: [CMS-concurrent-reset-start]
>>> 624.568: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 626.568: [GC [1 CMS-initial-mark: 12849K(21428K)] 46355K(139444K),
>>> 0.0054240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 626.574: [CMS-concurrent-mark-start]
>>> 626.588: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 626.588: [CMS-concurrent-preclean-start]
>>> 626.588: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 626.588: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 631.592:
>>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 631.592: [GC[YG occupancy: 33825 K (118016 K)]631.593: [Rescan
>>> (parallel) , 0.0041600 secs]631.597: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 46675K(139444K), 0.0042650 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 631.597: [CMS-concurrent-sweep-start]
>>> 631.598: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 631.598: [CMS-concurrent-reset-start]
>>> 631.607: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 632.495: [GC [1 CMS-initial-mark: 12849K(21428K)] 46839K(139444K),
>>> 0.0054380 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 632.501: [CMS-concurrent-mark-start]
>>> 632.516: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 632.516: [CMS-concurrent-preclean-start]
>>> 632.517: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 632.517: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 637.519:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 637.519: [GC[YG occupancy: 34350 K (118016 K)]637.519: [Rescan
>>> (parallel) , 0.0025310 secs]637.522: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 47200K(139444K), 0.0026540 secs]
>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 637.522: [CMS-concurrent-sweep-start]
>>> 637.523: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 637.523: [CMS-concurrent-reset-start]
>>> 637.532: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 639.532: [GC [1 CMS-initial-mark: 12849K(21428K)] 47328K(139444K),
>>> 0.0055330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 639.538: [CMS-concurrent-mark-start]
>>> 639.551: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>> sys=0.00, real=0.01 secs]
>>> 639.551: [CMS-concurrent-preclean-start]
>>> 639.552: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 639.552: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 644.561:
>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 644.561: [GC[YG occupancy: 34798 K (118016 K)]644.561: [Rescan
>>> (parallel) , 0.0040620 secs]644.565: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 47648K(139444K), 0.0041610 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 644.566: [CMS-concurrent-sweep-start]
>>> 644.568: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 644.568: [CMS-concurrent-reset-start]
>>> 644.577: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 646.577: [GC [1 CMS-initial-mark: 12849K(21428K)] 47776K(139444K),
>>> 0.0054660 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 646.583: [CMS-concurrent-mark-start]
>>> 646.596: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 646.596: [CMS-concurrent-preclean-start]
>>> 646.597: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 646.597: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 651.678:
>>> [CMS-concurrent-abortable-preclean: 0.732/5.081 secs] [Times:
>>> user=0.74 sys=0.00, real=5.08 secs]
>>> 651.678: [GC[YG occupancy: 35246 K (118016 K)]651.678: [Rescan
>>> (parallel) , 0.0025920 secs]651.681: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 48096K(139444K), 0.0026910 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 651.681: [CMS-concurrent-sweep-start]
>>> 651.682: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 651.682: [CMS-concurrent-reset-start]
>>> 651.690: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 653.691: [GC [1 CMS-initial-mark: 12849K(21428K)] 48224K(139444K),
>>> 0.0055640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 653.696: [CMS-concurrent-mark-start]
>>> 653.711: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>> sys=0.00, real=0.01 secs]
>>> 653.711: [CMS-concurrent-preclean-start]
>>> 653.711: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 653.711: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 658.721:
>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 658.721: [GC[YG occupancy: 35695 K (118016 K)]658.721: [Rescan
>>> (parallel) , 0.0040160 secs]658.725: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 48545K(139444K), 0.0041130 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 658.725: [CMS-concurrent-sweep-start]
>>> 658.727: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 658.728: [CMS-concurrent-reset-start]
>>> 658.737: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 660.737: [GC [1 CMS-initial-mark: 12849K(21428K)] 48673K(139444K),
>>> 0.0055230 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 660.743: [CMS-concurrent-mark-start]
>>> 660.756: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 660.756: [CMS-concurrent-preclean-start]
>>> 660.757: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 660.757: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 665.767:
>>> [CMS-concurrent-abortable-preclean: 0.704/5.011 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 665.768: [GC[YG occupancy: 36289 K (118016 K)]665.768: [Rescan
>>> (parallel) , 0.0033040 secs]665.771: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 49139K(139444K), 0.0034090 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 665.771: [CMS-concurrent-sweep-start]
>>> 665.773: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 665.773: [CMS-concurrent-reset-start]
>>> 665.781: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 667.781: [GC [1 CMS-initial-mark: 12849K(21428K)] 49267K(139444K),
>>> 0.0057830 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 667.787: [CMS-concurrent-mark-start]
>>> 667.802: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>> sys=0.00, real=0.01 secs]
>>> 667.802: [CMS-concurrent-preclean-start]
>>> 667.802: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 667.802: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 672.809:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 672.810: [GC[YG occupancy: 36737 K (118016 K)]672.810: [Rescan
>>> (parallel) , 0.0037010 secs]672.813: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 49587K(139444K), 0.0038010 secs]
>>> [Times: user=0.03 sys=0.00, real=0.01 secs]
>>> 672.814: [CMS-concurrent-sweep-start]
>>> 672.815: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 672.815: [CMS-concurrent-reset-start]
>>> 672.824: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 674.824: [GC [1 CMS-initial-mark: 12849K(21428K)] 49715K(139444K),
>>> 0.0058920 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 674.830: [CMS-concurrent-mark-start]
>>> 674.845: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 674.845: [CMS-concurrent-preclean-start]
>>> 674.845: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 674.845: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 679.849:
>>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 679.850: [GC[YG occupancy: 37185 K (118016 K)]679.850: [Rescan
>>> (parallel) , 0.0033420 secs]679.853: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 50035K(139444K), 0.0034440 secs]
>>> [Times: user=0.02 sys=0.00, real=0.01 secs]
>>> 679.853: [CMS-concurrent-sweep-start]
>>> 679.855: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 679.855: [CMS-concurrent-reset-start]
>>> 679.863: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 681.864: [GC [1 CMS-initial-mark: 12849K(21428K)] 50163K(139444K),
>>> 0.0058780 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 681.870: [CMS-concurrent-mark-start]
>>> 681.884: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 681.884: [CMS-concurrent-preclean-start]
>>> 681.884: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 681.884: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 686.890:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 686.891: [GC[YG occupancy: 37634 K (118016 K)]686.891: [Rescan
>>> (parallel) , 0.0044480 secs]686.895: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 50483K(139444K), 0.0045570 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 686.896: [CMS-concurrent-sweep-start]
>>> 686.897: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 686.897: [CMS-concurrent-reset-start]
>>> 686.905: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 688.905: [GC [1 CMS-initial-mark: 12849K(21428K)] 50612K(139444K),
>>> 0.0058940 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 688.911: [CMS-concurrent-mark-start]
>>> 688.925: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 688.925: [CMS-concurrent-preclean-start]
>>> 688.925: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 688.926: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 694.041:
>>> [CMS-concurrent-abortable-preclean: 0.718/5.115 secs] [Times:
>>> user=0.72 sys=0.00, real=5.11 secs]
>>> 694.041: [GC[YG occupancy: 38122 K (118016 K)]694.041: [Rescan
>>> (parallel) , 0.0028640 secs]694.044: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 50972K(139444K), 0.0029660 secs]
>>> [Times: user=0.03 sys=0.00, real=0.01 secs]
>>> 694.044: [CMS-concurrent-sweep-start]
>>> 694.046: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 694.046: [CMS-concurrent-reset-start]
>>> 694.054: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 696.054: [GC [1 CMS-initial-mark: 12849K(21428K)] 51100K(139444K),
>>> 0.0060550 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 696.060: [CMS-concurrent-mark-start]
>>> 696.074: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 696.074: [CMS-concurrent-preclean-start]
>>> 696.075: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 696.075: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 701.078:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 701.079: [GC[YG occupancy: 38571 K (118016 K)]701.079: [Rescan
>>> (parallel) , 0.0064210 secs]701.085: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 51421K(139444K), 0.0065220 secs]
>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>> 701.085: [CMS-concurrent-sweep-start]
>>> 701.087: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 701.088: [CMS-concurrent-reset-start]
>>> 701.097: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 703.097: [GC [1 CMS-initial-mark: 12849K(21428K)] 51549K(139444K),
>>> 0.0058470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 703.103: [CMS-concurrent-mark-start]
>>> 703.116: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
>>> sys=0.00, real=0.02 secs]
>>> 703.116: [CMS-concurrent-preclean-start]
>>> 703.117: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 703.117: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 708.125:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 708.125: [GC[YG occupancy: 39054 K (118016 K)]708.125: [Rescan
>>> (parallel) , 0.0037190 secs]708.129: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 51904K(139444K), 0.0038220 secs]
>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>> 708.129: [CMS-concurrent-sweep-start]
>>> 708.131: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 708.131: [CMS-concurrent-reset-start]
>>> 708.139: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 710.139: [GC [1 CMS-initial-mark: 12849K(21428K)] 52032K(139444K),
>>> 0.0059770 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 710.145: [CMS-concurrent-mark-start]
>>> 710.158: [CMS-concurrent-mark: 0.012/0.012 secs] [Times: user=0.05
>>> sys=0.00, real=0.01 secs]
>>> 710.158: [CMS-concurrent-preclean-start]
>>> 710.158: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 710.158: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 715.169:
>>> [CMS-concurrent-abortable-preclean: 0.705/5.011 secs] [Times:
>>> user=0.69 sys=0.01, real=5.01 secs]
>>> 715.169: [GC[YG occupancy: 39503 K (118016 K)]715.169: [Rescan
>>> (parallel) , 0.0042370 secs]715.173: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 52353K(139444K), 0.0043410 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 715.174: [CMS-concurrent-sweep-start]
>>> 715.176: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 715.176: [CMS-concurrent-reset-start]
>>> 715.185: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 717.185: [GC [1 CMS-initial-mark: 12849K(21428K)] 52481K(139444K),
>>> 0.0060050 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 717.191: [CMS-concurrent-mark-start]
>>> 717.205: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 717.205: [CMS-concurrent-preclean-start]
>>> 717.206: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 717.206: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 722.209:
>>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>>> user=0.71 sys=0.00, real=5.00 secs]
>>> 722.210: [GC[YG occupancy: 40161 K (118016 K)]722.210: [Rescan
>>> (parallel) , 0.0041630 secs]722.214: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 53011K(139444K), 0.0042630 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 722.214: [CMS-concurrent-sweep-start]
>>> 722.216: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 722.216: [CMS-concurrent-reset-start]
>>> 722.226: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 722.521: [GC [1 CMS-initial-mark: 12849K(21428K)] 53099K(139444K),
>>> 0.0062380 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 722.528: [CMS-concurrent-mark-start]
>>> 722.544: [CMS-concurrent-mark: 0.015/0.016 secs] [Times: user=0.05
>>> sys=0.01, real=0.02 secs]
>>> 722.544: [CMS-concurrent-preclean-start]
>>> 722.544: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 722.544: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 727.558:
>>> [CMS-concurrent-abortable-preclean: 0.709/5.014 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 727.558: [GC[YG occupancy: 40610 K (118016 K)]727.558: [Rescan
>>> (parallel) , 0.0041700 secs]727.563: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 53460K(139444K), 0.0042780 secs]
>>> [Times: user=0.05 sys=0.00, real=0.00 secs]
>>> 727.563: [CMS-concurrent-sweep-start]
>>> 727.564: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 727.564: [CMS-concurrent-reset-start]
>>> 727.573: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.02 secs]
>>> 729.574: [GC [1 CMS-initial-mark: 12849K(21428K)] 53588K(139444K),
>>> 0.0062700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 729.580: [CMS-concurrent-mark-start]
>>> 729.595: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>> sys=0.00, real=0.02 secs]
>>> 729.595: [CMS-concurrent-preclean-start]
>>> 729.595: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 729.595: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 734.597:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 734.597: [GC[YG occupancy: 41058 K (118016 K)]734.597: [Rescan
>>> (parallel) , 0.0053870 secs]734.603: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 53908K(139444K), 0.0054870 secs]
>>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>>> 734.603: [CMS-concurrent-sweep-start]
>>> 734.604: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 734.604: [CMS-concurrent-reset-start]
>>> 734.614: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 734.877: [GC [1 CMS-initial-mark: 12849K(21428K)] 53908K(139444K),
>>> 0.0067230 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 734.884: [CMS-concurrent-mark-start]
>>> 734.899: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>> sys=0.00, real=0.01 secs]
>>> 734.899: [CMS-concurrent-preclean-start]
>>> 734.899: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 734.899: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 739.905:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 739.906: [GC[YG occupancy: 41379 K (118016 K)]739.906: [Rescan
>>> (parallel) , 0.0050680 secs]739.911: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 54228K(139444K), 0.0051690 secs]
>>> [Times: user=0.05 sys=0.00, real=0.00 secs]
>>> 739.911: [CMS-concurrent-sweep-start]
>>> 739.912: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 739.912: [CMS-concurrent-reset-start]
>>> 739.921: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 741.922: [GC [1 CMS-initial-mark: 12849K(21428K)] 54356K(139444K),
>>> 0.0062880 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 741.928: [CMS-concurrent-mark-start]
>>> 741.942: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 741.942: [CMS-concurrent-preclean-start]
>>> 741.943: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 741.943: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 747.059:
>>> [CMS-concurrent-abortable-preclean: 0.711/5.117 secs] [Times:
>>> user=0.71 sys=0.00, real=5.12 secs]
>>> 747.060: [GC[YG occupancy: 41827 K (118016 K)]747.060: [Rescan
>>> (parallel) , 0.0051040 secs]747.065: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 54677K(139444K), 0.0052090 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 747.065: [CMS-concurrent-sweep-start]
>>> 747.067: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 747.067: [CMS-concurrent-reset-start]
>>> 747.075: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 749.075: [GC [1 CMS-initial-mark: 12849K(21428K)] 54805K(139444K),
>>> 0.0063470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 749.082: [CMS-concurrent-mark-start]
>>> 749.095: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 749.095: [CMS-concurrent-preclean-start]
>>> 749.096: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 749.096: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 754.175:
>>> [CMS-concurrent-abortable-preclean: 0.718/5.079 secs] [Times:
>>> user=0.72 sys=0.00, real=5.08 secs]
>>> 754.175: [GC[YG occupancy: 42423 K (118016 K)]754.175: [Rescan
>>> (parallel) , 0.0051290 secs]754.180: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 55273K(139444K), 0.0052290 secs]
>>> [Times: user=0.05 sys=0.00, real=0.00 secs]
>>> 754.181: [CMS-concurrent-sweep-start]
>>> 754.182: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 754.182: [CMS-concurrent-reset-start]
>>> 754.191: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 756.191: [GC [1 CMS-initial-mark: 12849K(21428K)] 55401K(139444K),
>>> 0.0064020 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 756.198: [CMS-concurrent-mark-start]
>>> 756.212: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 756.212: [CMS-concurrent-preclean-start]
>>> 756.213: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 756.213: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 761.217:
>>> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 761.218: [GC[YG occupancy: 42871 K (118016 K)]761.218: [Rescan
>>> (parallel) , 0.0052310 secs]761.223: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 55721K(139444K), 0.0053300 secs]
>>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>>> 761.223: [CMS-concurrent-sweep-start]
>>> 761.225: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 761.225: [CMS-concurrent-reset-start]
>>> 761.234: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 763.234: [GC [1 CMS-initial-mark: 12849K(21428K)] 55849K(139444K),
>>> 0.0045400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 763.239: [CMS-concurrent-mark-start]
>>> 763.253: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>>> sys=0.00, real=0.01 secs]
>>> 763.253: [CMS-concurrent-preclean-start]
>>> 763.253: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 763.253: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 768.348:
>>> [CMS-concurrent-abortable-preclean: 0.690/5.095 secs] [Times:
>>> user=0.69 sys=0.00, real=5.10 secs]
>>> 768.349: [GC[YG occupancy: 43320 K (118016 K)]768.349: [Rescan
>>> (parallel) , 0.0045140 secs]768.353: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 56169K(139444K), 0.0046170 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 768.353: [CMS-concurrent-sweep-start]
>>> 768.356: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 768.356: [CMS-concurrent-reset-start]
>>> 768.365: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 770.365: [GC [1 CMS-initial-mark: 12849K(21428K)] 56298K(139444K),
>>> 0.0063950 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 770.372: [CMS-concurrent-mark-start]
>>> 770.388: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 770.388: [CMS-concurrent-preclean-start]
>>> 770.388: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 770.388: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 775.400:
>>> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 775.401: [GC[YG occupancy: 43768 K (118016 K)]775.401: [Rescan
>>> (parallel) , 0.0043990 secs]775.405: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 56618K(139444K), 0.0045000 secs]
>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 775.405: [CMS-concurrent-sweep-start]
>>> 775.407: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 775.407: [CMS-concurrent-reset-start]
>>> 775.417: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 777.417: [GC [1 CMS-initial-mark: 12849K(21428K)] 56746K(139444K),
>>> 0.0064580 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 777.423: [CMS-concurrent-mark-start]
>>> 777.438: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 777.438: [CMS-concurrent-preclean-start]
>>> 777.439: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 777.439: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 782.448:
>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 782.448: [GC[YG occupancy: 44321 K (118016 K)]782.448: [Rescan
>>> (parallel) , 0.0054760 secs]782.454: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 57171K(139444K), 0.0055780 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 782.454: [CMS-concurrent-sweep-start]
>>> 782.455: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 782.455: [CMS-concurrent-reset-start]
>>> 782.464: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 782.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 57235K(139444K),
>>> 0.0066970 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 782.550: [CMS-concurrent-mark-start]
>>> 782.567: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 782.567: [CMS-concurrent-preclean-start]
>>> 782.568: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 782.568: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 787.574:
>>> [CMS-concurrent-abortable-preclean: 0.700/5.006 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 787.574: [GC[YG occupancy: 44746 K (118016 K)]787.574: [Rescan
>>> (parallel) , 0.0049170 secs]787.579: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 57596K(139444K), 0.0050210 secs]
>>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>>> 787.579: [CMS-concurrent-sweep-start]
>>> 787.581: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 787.581: [CMS-concurrent-reset-start]
>>> 787.590: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 789.591: [GC [1 CMS-initial-mark: 12849K(21428K)] 57724K(139444K),
>>> 0.0066850 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 789.598: [CMS-concurrent-mark-start]
>>> 789.614: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 789.614: [CMS-concurrent-preclean-start]
>>> 789.615: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 789.615: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 794.626:
>>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 794.627: [GC[YG occupancy: 45195 K (118016 K)]794.627: [Rescan
>>> (parallel) , 0.0056520 secs]794.632: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 58044K(139444K), 0.0057510 secs]
>>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>>> 794.632: [CMS-concurrent-sweep-start]
>>> 794.634: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 794.634: [CMS-concurrent-reset-start]
>>> 794.643: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 796.643: [GC [1 CMS-initial-mark: 12849K(21428K)] 58172K(139444K),
>>> 0.0067410 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 796.650: [CMS-concurrent-mark-start]
>>> 796.666: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 796.666: [CMS-concurrent-preclean-start]
>>> 796.667: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 796.667: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 801.670:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 801.670: [GC[YG occupancy: 45643 K (118016 K)]801.670: [Rescan
>>> (parallel) , 0.0043550 secs]801.675: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 58493K(139444K), 0.0044580 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 801.675: [CMS-concurrent-sweep-start]
>>> 801.677: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 801.677: [CMS-concurrent-reset-start]
>>> 801.686: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 803.686: [GC [1 CMS-initial-mark: 12849K(21428K)] 58621K(139444K),
>>> 0.0067250 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 803.693: [CMS-concurrent-mark-start]
>>> 803.708: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 803.708: [CMS-concurrent-preclean-start]
>>> 803.709: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 803.709: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 808.717:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.008 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 808.717: [GC[YG occupancy: 46091 K (118016 K)]808.717: [Rescan
>>> (parallel) , 0.0034790 secs]808.720: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 58941K(139444K), 0.0035820 secs]
>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>> 808.721: [CMS-concurrent-sweep-start]
>>> 808.722: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 808.722: [CMS-concurrent-reset-start]
>>> 808.730: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 810.731: [GC [1 CMS-initial-mark: 12849K(21428K)] 59069K(139444K),
>>> 0.0067580 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 810.738: [CMS-concurrent-mark-start]
>>> 810.755: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 810.755: [CMS-concurrent-preclean-start]
>>> 810.755: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 810.755: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 815.823:
>>> [CMS-concurrent-abortable-preclean: 0.715/5.068 secs] [Times:
>>> user=0.72 sys=0.00, real=5.06 secs]
>>> 815.824: [GC[YG occupancy: 46580 K (118016 K)]815.824: [Rescan
>>> (parallel) , 0.0048490 secs]815.829: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 59430K(139444K), 0.0049600 secs]
>>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>>> 815.829: [CMS-concurrent-sweep-start]
>>> 815.831: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 815.831: [CMS-concurrent-reset-start]
>>> 815.840: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 817.840: [GC [1 CMS-initial-mark: 12849K(21428K)] 59558K(139444K),
>>> 0.0068880 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 817.847: [CMS-concurrent-mark-start]
>>> 817.864: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 817.864: [CMS-concurrent-preclean-start]
>>> 817.865: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 817.865: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 822.868:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>>> user=0.69 sys=0.01, real=5.00 secs]
>>> 822.868: [GC[YG occupancy: 47028 K (118016 K)]822.868: [Rescan
>>> (parallel) , 0.0061120 secs]822.874: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 59878K(139444K), 0.0062150 secs]
>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 822.874: [CMS-concurrent-sweep-start]
>>> 822.876: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 822.876: [CMS-concurrent-reset-start]
>>> 822.885: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 824.885: [GC [1 CMS-initial-mark: 12849K(21428K)] 60006K(139444K),
>>> 0.0068610 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 824.892: [CMS-concurrent-mark-start]
>>> 824.908: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 824.908: [CMS-concurrent-preclean-start]
>>> 824.908: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 824.908: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 829.914:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 829.915: [GC[YG occupancy: 47477 K (118016 K)]829.915: [Rescan
>>> (parallel) , 0.0034890 secs]829.918: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 60327K(139444K), 0.0035930 secs]
>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>> 829.918: [CMS-concurrent-sweep-start]
>>> 829.920: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 829.920: [CMS-concurrent-reset-start]
>>> 829.930: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 831.930: [GC [1 CMS-initial-mark: 12849K(21428K)] 60455K(139444K),
>>> 0.0069040 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 831.937: [CMS-concurrent-mark-start]
>>> 831.953: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 831.953: [CMS-concurrent-preclean-start]
>>> 831.954: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 831.954: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 836.957:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.71 sys=0.00, real=5.00 secs]
>>> 836.957: [GC[YG occupancy: 47925 K (118016 K)]836.957: [Rescan
>>> (parallel) , 0.0060440 secs]836.963: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 60775K(139444K), 0.0061520 secs]
>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 836.964: [CMS-concurrent-sweep-start]
>>> 836.965: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 836.965: [CMS-concurrent-reset-start]
>>> 836.974: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 838.974: [GC [1 CMS-initial-mark: 12849K(21428K)] 60903K(139444K),
>>> 0.0069860 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 838.982: [CMS-concurrent-mark-start]
>>> 838.997: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 838.998: [CMS-concurrent-preclean-start]
>>> 838.998: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 838.998: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 844.091:
>>> [CMS-concurrent-abortable-preclean: 0.718/5.093 secs] [Times:
>>> user=0.72 sys=0.00, real=5.09 secs]
>>> 844.092: [GC[YG occupancy: 48731 K (118016 K)]844.092: [Rescan
>>> (parallel) , 0.0052610 secs]844.097: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 61581K(139444K), 0.0053620 secs]
>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 844.097: [CMS-concurrent-sweep-start]
>>> 844.099: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 844.099: [CMS-concurrent-reset-start]
>>> 844.108: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 846.109: [GC [1 CMS-initial-mark: 12849K(21428K)] 61709K(139444K),
>>> 0.0071980 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 846.116: [CMS-concurrent-mark-start]
>>> 846.133: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 846.133: [CMS-concurrent-preclean-start]
>>> 846.134: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 846.134: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 851.137:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 851.137: [GC[YG occupancy: 49180 K (118016 K)]851.137: [Rescan
>>> (parallel) , 0.0061320 secs]851.143: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 62030K(139444K), 0.0062320 secs]
>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>> 851.144: [CMS-concurrent-sweep-start]
>>> 851.145: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 851.145: [CMS-concurrent-reset-start]
>>> 851.154: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 853.154: [GC [1 CMS-initial-mark: 12849K(21428K)] 62158K(139444K),
>>> 0.0071610 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 853.162: [CMS-concurrent-mark-start]
>>> 853.177: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 853.177: [CMS-concurrent-preclean-start]
>>> 853.178: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 853.178: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 858.181:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 858.181: [GC[YG occupancy: 49628 K (118016 K)]858.181: [Rescan
>>> (parallel) , 0.0029560 secs]858.184: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 62478K(139444K), 0.0030590 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 858.184: [CMS-concurrent-sweep-start]
>>> 858.186: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 858.186: [CMS-concurrent-reset-start]
>>> 858.195: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 860.196: [GC [1 CMS-initial-mark: 12849K(21428K)] 62606K(139444K),
>>> 0.0072070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 860.203: [CMS-concurrent-mark-start]
>>> 860.219: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 860.219: [CMS-concurrent-preclean-start]
>>> 860.219: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 860.219: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 865.226:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 865.227: [GC[YG occupancy: 50076 K (118016 K)]865.227: [Rescan
>>> (parallel) , 0.0066610 secs]865.233: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 62926K(139444K), 0.0067670 secs]
>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>> 865.233: [CMS-concurrent-sweep-start]
>>> 865.235: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 865.235: [CMS-concurrent-reset-start]
>>> 865.244: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 867.244: [GC [1 CMS-initial-mark: 12849K(21428K)] 63054K(139444K),
>>> 0.0072490 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 867.252: [CMS-concurrent-mark-start]
>>> 867.267: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 867.267: [CMS-concurrent-preclean-start]
>>> 867.268: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 867.268: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 872.281:
>>> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 872.281: [GC[YG occupancy: 50525 K (118016 K)]872.281: [Rescan
>>> (parallel) , 0.0053780 secs]872.286: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 63375K(139444K), 0.0054790 secs]
>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 872.287: [CMS-concurrent-sweep-start]
>>> 872.288: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 872.288: [CMS-concurrent-reset-start]
>>> 872.296: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 872.572: [GC [1 CMS-initial-mark: 12849K(21428K)] 63439K(139444K),
>>> 0.0073060 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 872.580: [CMS-concurrent-mark-start]
>>> 872.597: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 872.597: [CMS-concurrent-preclean-start]
>>> 872.597: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 872.597: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 877.600:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 877.601: [GC[YG occupancy: 51049 K (118016 K)]877.601: [Rescan
>>> (parallel) , 0.0063070 secs]877.607: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 63899K(139444K), 0.0064090 secs]
>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 877.607: [CMS-concurrent-sweep-start]
>>> 877.609: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 877.609: [CMS-concurrent-reset-start]
>>> 877.619: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 879.619: [GC [1 CMS-initial-mark: 12849K(21428K)] 64027K(139444K),
>>> 0.0073320 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 879.626: [CMS-concurrent-mark-start]
>>> 879.643: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 879.643: [CMS-concurrent-preclean-start]
>>> 879.644: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 879.644: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 884.657:
>>> [CMS-concurrent-abortable-preclean: 0.708/5.014 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 884.658: [GC[YG occupancy: 51497 K (118016 K)]884.658: [Rescan
>>> (parallel) , 0.0056160 secs]884.663: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 64347K(139444K), 0.0057150 secs]
>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>> 884.663: [CMS-concurrent-sweep-start]
>>> 884.665: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 884.665: [CMS-concurrent-reset-start]
>>> 884.674: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 886.674: [GC [1 CMS-initial-mark: 12849K(21428K)] 64475K(139444K),
>>> 0.0073420 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 886.682: [CMS-concurrent-mark-start]
>>> 886.698: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 886.698: [CMS-concurrent-preclean-start]
>>> 886.698: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 886.698: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 891.702:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 891.702: [GC[YG occupancy: 51945 K (118016 K)]891.702: [Rescan
>>> (parallel) , 0.0070120 secs]891.709: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 64795K(139444K), 0.0071150 secs]
>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>> 891.709: [CMS-concurrent-sweep-start]
>>> 891.711: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 891.711: [CMS-concurrent-reset-start]
>>> 891.721: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 893.721: [GC [1 CMS-initial-mark: 12849K(21428K)] 64923K(139444K),
>>> 0.0073880 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 893.728: [CMS-concurrent-mark-start]
>>> 893.745: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 893.745: [CMS-concurrent-preclean-start]
>>> 893.745: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 893.745: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 898.852:
>>> [CMS-concurrent-abortable-preclean: 0.715/5.107 secs] [Times:
>>> user=0.71 sys=0.00, real=5.10 secs]
>>> 898.853: [GC[YG occupancy: 53466 K (118016 K)]898.853: [Rescan
>>> (parallel) , 0.0060600 secs]898.859: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 66315K(139444K), 0.0061640 secs]
>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 898.859: [CMS-concurrent-sweep-start]
>>> 898.861: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 898.861: [CMS-concurrent-reset-start]
>>> 898.870: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 900.871: [GC [1 CMS-initial-mark: 12849K(21428K)] 66444K(139444K),
>>> 0.0074670 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 900.878: [CMS-concurrent-mark-start]
>>> 900.895: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 900.895: [CMS-concurrent-preclean-start]
>>> 900.896: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 900.896: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 905.969:
>>> [CMS-concurrent-abortable-preclean: 0.716/5.074 secs] [Times:
>>> user=0.72 sys=0.01, real=5.07 secs]
>>> 905.969: [GC[YG occupancy: 54157 K (118016 K)]905.970: [Rescan
>>> (parallel) , 0.0068200 secs]905.976: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 67007K(139444K), 0.0069250 secs]
>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>> 905.977: [CMS-concurrent-sweep-start]
>>> 905.978: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 905.978: [CMS-concurrent-reset-start]
>>> 905.986: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 907.986: [GC [1 CMS-initial-mark: 12849K(21428K)] 67135K(139444K),
>>> 0.0076010 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 907.994: [CMS-concurrent-mark-start]
>>> 908.009: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 908.009: [CMS-concurrent-preclean-start]
>>> 908.010: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 908.010: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 913.013:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.69 sys=0.01, real=5.00 secs]
>>> 913.013: [GC[YG occupancy: 54606 K (118016 K)]913.013: [Rescan
>>> (parallel) , 0.0053650 secs]913.018: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 67455K(139444K), 0.0054650 secs]
>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 913.019: [CMS-concurrent-sweep-start]
>>> 913.021: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 913.021: [CMS-concurrent-reset-start]
>>> 913.030: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 915.030: [GC [1 CMS-initial-mark: 12849K(21428K)] 67583K(139444K),
>>> 0.0076410 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 915.038: [CMS-concurrent-mark-start]
>>> 915.055: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 915.055: [CMS-concurrent-preclean-start]
>>> 915.056: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 915.056: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 920.058:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 920.058: [GC[YG occupancy: 55054 K (118016 K)]920.058: [Rescan
>>> (parallel) , 0.0058380 secs]920.064: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 67904K(139444K), 0.0059420 secs]
>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>> 920.064: [CMS-concurrent-sweep-start]
>>> 920.066: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 920.066: [CMS-concurrent-reset-start]
>>> 920.075: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.01, real=0.01 secs]
>>> 922.075: [GC [1 CMS-initial-mark: 12849K(21428K)] 68032K(139444K),
>>> 0.0075820 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 922.083: [CMS-concurrent-mark-start]
>>> 922.098: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 922.098: [CMS-concurrent-preclean-start]
>>> 922.099: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 922.099: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 927.102:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 927.102: [GC[YG occupancy: 55502 K (118016 K)]927.102: [Rescan
>>> (parallel) , 0.0059190 secs]927.108: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 68352K(139444K), 0.0060220 secs]
>>> [Times: user=0.06 sys=0.01, real=0.01 secs]
>>> 927.108: [CMS-concurrent-sweep-start]
>>> 927.110: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 927.110: [CMS-concurrent-reset-start]
>>> 927.120: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 929.120: [GC [1 CMS-initial-mark: 12849K(21428K)] 68480K(139444K),
>>> 0.0077620 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 929.128: [CMS-concurrent-mark-start]
>>> 929.145: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 929.145: [CMS-concurrent-preclean-start]
>>> 929.145: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 929.145: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 934.237:
>>> [CMS-concurrent-abortable-preclean: 0.717/5.092 secs] [Times:
>>> user=0.72 sys=0.00, real=5.09 secs]
>>> 934.238: [GC[YG occupancy: 55991 K (118016 K)]934.238: [Rescan
>>> (parallel) , 0.0042660 secs]934.242: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 68841K(139444K), 0.0043660 secs]
>>> [Times: user=0.05 sys=0.00, real=0.00 secs]
>>> 934.242: [CMS-concurrent-sweep-start]
>>> 934.244: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 934.244: [CMS-concurrent-reset-start]
>>> 934.252: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 936.253: [GC [1 CMS-initial-mark: 12849K(21428K)] 68969K(139444K),
>>> 0.0077340 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 936.261: [CMS-concurrent-mark-start]
>>> 936.277: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 936.277: [CMS-concurrent-preclean-start]
>>> 936.278: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 936.278: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 941.284:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 941.284: [GC[YG occupancy: 56439 K (118016 K)]941.284: [Rescan
>>> (parallel) , 0.0059460 secs]941.290: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 69289K(139444K), 0.0060470 secs]
>>> [Times: user=0.08 sys=0.00, real=0.00 secs]
>>> 941.290: [CMS-concurrent-sweep-start]
>>> 941.293: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 941.293: [CMS-concurrent-reset-start]
>>> 941.302: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 943.302: [GC [1 CMS-initial-mark: 12849K(21428K)] 69417K(139444K),
>>> 0.0077760 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 943.310: [CMS-concurrent-mark-start]
>>> 943.326: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 943.326: [CMS-concurrent-preclean-start]
>>> 943.327: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 943.327: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 948.340:
>>> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 948.340: [GC[YG occupancy: 56888 K (118016 K)]948.340: [Rescan
>>> (parallel) , 0.0047760 secs]948.345: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 69738K(139444K), 0.0048770 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 948.345: [CMS-concurrent-sweep-start]
>>> 948.347: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 948.347: [CMS-concurrent-reset-start]
>>> 948.356: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 950.356: [GC [1 CMS-initial-mark: 12849K(21428K)] 69866K(139444K),
>>> 0.0077750 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 950.364: [CMS-concurrent-mark-start]
>>> 950.380: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 950.380: [CMS-concurrent-preclean-start]
>>> 950.380: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 950.380: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 955.384:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.004 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 955.384: [GC[YG occupancy: 57336 K (118016 K)]955.384: [Rescan
>>> (parallel) , 0.0072540 secs]955.392: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 70186K(139444K), 0.0073540 secs]
>>> [Times: user=0.08 sys=0.00, real=0.00 secs]
>>> 955.392: [CMS-concurrent-sweep-start]
>>> 955.394: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 955.394: [CMS-concurrent-reset-start]
>>> 955.403: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 957.403: [GC [1 CMS-initial-mark: 12849K(21428K)] 70314K(139444K),
>>> 0.0078120 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 957.411: [CMS-concurrent-mark-start]
>>> 957.427: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 957.427: [CMS-concurrent-preclean-start]
>>> 957.427: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 957.427: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 962.437:
>>> [CMS-concurrent-abortable-preclean: 0.704/5.010 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 962.437: [GC[YG occupancy: 57889 K (118016 K)]962.437: [Rescan
>>> (parallel) , 0.0076140 secs]962.445: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 70739K(139444K), 0.0077160 secs]
>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>> 962.445: [CMS-concurrent-sweep-start]
>>> 962.446: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 962.446: [CMS-concurrent-reset-start]
>>> 962.456: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 962.599: [GC [1 CMS-initial-mark: 12849K(21428K)] 70827K(139444K),
>>> 0.0081180 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 962.608: [CMS-concurrent-mark-start]
>>> 962.626: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 962.626: [CMS-concurrent-preclean-start]
>>> 962.626: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 962.626: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 967.632:
>>> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 967.632: [GC[YG occupancy: 58338 K (118016 K)]967.632: [Rescan
>>> (parallel) , 0.0061170 secs]967.638: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 71188K(139444K), 0.0062190 secs]
>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>> 967.638: [CMS-concurrent-sweep-start]
>>> 967.640: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 967.640: [CMS-concurrent-reset-start]
>>> 967.648: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 969.648: [GC [1 CMS-initial-mark: 12849K(21428K)] 71316K(139444K),
>>> 0.0081110 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 969.656: [CMS-concurrent-mark-start]
>>> 969.674: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 969.674: [CMS-concurrent-preclean-start]
>>> 969.674: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 969.674: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 974.677:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 974.677: [GC[YG occupancy: 58786 K (118016 K)]974.677: [Rescan
>>> (parallel) , 0.0070810 secs]974.685: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 71636K(139444K), 0.0072050 secs]
>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>> 974.685: [CMS-concurrent-sweep-start]
>>> 974.686: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 974.686: [CMS-concurrent-reset-start]
>>> 974.695: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 976.696: [GC [1 CMS-initial-mark: 12849K(21428K)] 71764K(139444K),
>>> 0.0080650 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 976.704: [CMS-concurrent-mark-start]
>>> 976.719: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 976.719: [CMS-concurrent-preclean-start]
>>> 976.719: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 976.719: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 981.727:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>> user=0.69 sys=0.01, real=5.01 secs]
>>> 981.727: [GC[YG occupancy: 59235 K (118016 K)]981.727: [Rescan
>>> (parallel) , 0.0066570 secs]981.734: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 72085K(139444K), 0.0067620 secs]
>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>> 981.734: [CMS-concurrent-sweep-start]
>>> 981.736: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 981.736: [CMS-concurrent-reset-start]
>>> 981.745: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 983.745: [GC [1 CMS-initial-mark: 12849K(21428K)] 72213K(139444K),
>>> 0.0081400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 983.753: [CMS-concurrent-mark-start]
>>> 983.769: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 983.769: [CMS-concurrent-preclean-start]
>>> 983.769: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 983.769: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 988.840:
>>> [CMS-concurrent-abortable-preclean: 0.716/5.071 secs] [Times:
>>> user=0.71 sys=0.00, real=5.07 secs]
>>> 988.840: [GC[YG occupancy: 59683 K (118016 K)]988.840: [Rescan
>>> (parallel) , 0.0076020 secs]988.848: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 72533K(139444K), 0.0077100 secs]
>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>> 988.848: [CMS-concurrent-sweep-start]
>>> 988.850: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 988.850: [CMS-concurrent-reset-start]
>>> 988.858: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 990.858: [GC [1 CMS-initial-mark: 12849K(21428K)] 72661K(139444K),
>>> 0.0081810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 990.867: [CMS-concurrent-mark-start]
>>> 990.884: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 990.884: [CMS-concurrent-preclean-start]
>>> 990.885: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 990.885: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 995.999:
>>> [CMS-concurrent-abortable-preclean: 0.721/5.114 secs] [Times:
>>> user=0.73 sys=0.00, real=5.11 secs]
>>> 995.999: [GC[YG occupancy: 60307 K (118016 K)]995.999: [Rescan
>>> (parallel) , 0.0058190 secs]996.005: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 73156K(139444K), 0.0059260 secs]
>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 996.005: [CMS-concurrent-sweep-start]
>>> 996.007: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 996.007: [CMS-concurrent-reset-start]
>>> 996.016: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 998.016: [GC [1 CMS-initial-mark: 12849K(21428K)] 73285K(139444K),
>>> 0.0052760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 998.022: [CMS-concurrent-mark-start]
>>> 998.038: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 998.038: [CMS-concurrent-preclean-start]
>>> 998.039: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 998.039: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1003.048:
>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 1003.048: [GC[YG occupancy: 60755 K (118016 K)]1003.048: [Rescan
>>> (parallel) , 0.0068040 secs]1003.055: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 73605K(139444K), 0.0069060 secs]
>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>> 1003.055: [CMS-concurrent-sweep-start]
>>> 1003.057: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1003.057: [CMS-concurrent-reset-start]
>>> 1003.066: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1005.066: [GC [1 CMS-initial-mark: 12849K(21428K)] 73733K(139444K),
>>> 0.0082200 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1005.075: [CMS-concurrent-mark-start]
>>> 1005.090: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1005.090: [CMS-concurrent-preclean-start]
>>> 1005.090: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1005.090: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1010.094:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 1010.094: [GC[YG occupancy: 61203 K (118016 K)]1010.094: [Rescan
>>> (parallel) , 0.0066010 secs]1010.101: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 74053K(139444K), 0.0067120 secs]
>>> [Times: user=0.08 sys=0.00, real=0.00 secs]
>>> 1010.101: [CMS-concurrent-sweep-start]
>>> 1010.103: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1010.103: [CMS-concurrent-reset-start]
>>> 1010.112: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1012.113: [GC [1 CMS-initial-mark: 12849K(21428K)] 74181K(139444K),
>>> 0.0083460 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1012.121: [CMS-concurrent-mark-start]
>>> 1012.137: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1012.137: [CMS-concurrent-preclean-start]
>>> 1012.138: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1012.138: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1017.144:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 1017.144: [GC[YG occupancy: 61651 K (118016 K)]1017.144: [Rescan
>>> (parallel) , 0.0058810 secs]1017.150: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 74501K(139444K), 0.0059830 secs]
>>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>>> 1017.151: [CMS-concurrent-sweep-start]
>>> 1017.153: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1017.153: [CMS-concurrent-reset-start]
>>> 1017.162: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1019.162: [GC [1 CMS-initial-mark: 12849K(21428K)] 74629K(139444K),
>>> 0.0083310 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1019.171: [CMS-concurrent-mark-start]
>>> 1019.187: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1019.187: [CMS-concurrent-preclean-start]
>>> 1019.187: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1019.187: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1024.261:
>>> [CMS-concurrent-abortable-preclean: 0.717/5.074 secs] [Times:
>>> user=0.72 sys=0.00, real=5.07 secs]
>>> 1024.261: [GC[YG occupancy: 62351 K (118016 K)]1024.262: [Rescan
>>> (parallel) , 0.0069720 secs]1024.269: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 75200K(139444K), 0.0070750 secs]
>>> [Times: user=0.08 sys=0.01, real=0.01 secs]
>>> 1024.269: [CMS-concurrent-sweep-start]
>>> 1024.270: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1024.270: [CMS-concurrent-reset-start]
>>> 1024.278: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1026.279: [GC [1 CMS-initial-mark: 12849K(21428K)] 75329K(139444K),
>>> 0.0086360 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1026.288: [CMS-concurrent-mark-start]
>>> 1026.305: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1026.305: [CMS-concurrent-preclean-start]
>>> 1026.305: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1026.305: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1031.308:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1031.308: [GC[YG occupancy: 62799 K (118016 K)]1031.308: [Rescan
>>> (parallel) , 0.0069330 secs]1031.315: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 75649K(139444K), 0.0070380 secs]
>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>> 1031.315: [CMS-concurrent-sweep-start]
>>> 1031.316: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1031.316: [CMS-concurrent-reset-start]
>>> 1031.326: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1033.326: [GC [1 CMS-initial-mark: 12849K(21428K)] 75777K(139444K),
>>> 0.0085850 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1033.335: [CMS-concurrent-mark-start]
>>> 1033.350: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1033.350: [CMS-concurrent-preclean-start]
>>> 1033.351: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1033.351: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1038.357:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>> user=0.69 sys=0.01, real=5.01 secs]
>>> 1038.358: [GC[YG occupancy: 63247 K (118016 K)]1038.358: [Rescan
>>> (parallel) , 0.0071860 secs]1038.365: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 76097K(139444K), 0.0072900 secs]
>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>> 1038.365: [CMS-concurrent-sweep-start]
>>> 1038.367: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1038.367: [CMS-concurrent-reset-start]
>>> 1038.376: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1040.376: [GC [1 CMS-initial-mark: 12849K(21428K)] 76225K(139444K),
>>> 0.0085910 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1040.385: [CMS-concurrent-mark-start]
>>> 1040.401: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1040.401: [CMS-concurrent-preclean-start]
>>> 1040.401: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1040.401: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1045.411:
>>> [CMS-concurrent-abortable-preclean: 0.705/5.010 secs] [Times:
>>> user=0.69 sys=0.01, real=5.01 secs]
>>> 1045.412: [GC[YG occupancy: 63695 K (118016 K)]1045.412: [Rescan
>>> (parallel) , 0.0082050 secs]1045.420: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 76545K(139444K), 0.0083110 secs]
>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>> 1045.420: [CMS-concurrent-sweep-start]
>>> 1045.421: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1045.421: [CMS-concurrent-reset-start]
>>> 1045.430: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1047.430: [GC [1 CMS-initial-mark: 12849K(21428K)] 76673K(139444K),
>>> 0.0086110 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1047.439: [CMS-concurrent-mark-start]
>>> 1047.456: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1047.456: [CMS-concurrent-preclean-start]
>>> 1047.456: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1047.456: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1052.462:
>>> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 1052.462: [GC[YG occupancy: 64144 K (118016 K)]1052.462: [Rescan
>>> (parallel) , 0.0087770 secs]1052.471: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 76994K(139444K), 0.0088770 secs]
>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>> 1052.471: [CMS-concurrent-sweep-start]
>>> 1052.472: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1052.472: [CMS-concurrent-reset-start]
>>> 1052.481: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1052.628: [GC [1 CMS-initial-mark: 12849K(21428K)] 77058K(139444K),
>>> 0.0086170 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1052.637: [CMS-concurrent-mark-start]
>>> 1052.655: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1052.655: [CMS-concurrent-preclean-start]
>>> 1052.656: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1052.656: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1057.658:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1057.658: [GC[YG occupancy: 64569 K (118016 K)]1057.658: [Rescan
>>> (parallel) , 0.0072850 secs]1057.665: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 77418K(139444K), 0.0073880 secs]
>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>> 1057.666: [CMS-concurrent-sweep-start]
>>> 1057.668: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1057.668: [CMS-concurrent-reset-start]
>>> 1057.677: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1059.677: [GC [1 CMS-initial-mark: 12849K(21428K)] 77547K(139444K),
>>> 0.0086820 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1059.686: [CMS-concurrent-mark-start]
>>> 1059.703: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1059.703: [CMS-concurrent-preclean-start]
>>> 1059.703: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1059.703: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1064.712:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1064.712: [GC[YG occupancy: 65017 K (118016 K)]1064.712: [Rescan
>>> (parallel) , 0.0071630 secs]1064.720: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 77867K(139444K), 0.0072700 secs]
>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>> 1064.720: [CMS-concurrent-sweep-start]
>>> 1064.722: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1064.722: [CMS-concurrent-reset-start]
>>> 1064.731: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1066.731: [GC [1 CMS-initial-mark: 12849K(21428K)] 77995K(139444K),
>>> 0.0087640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1066.740: [CMS-concurrent-mark-start]
>>> 1066.757: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1066.757: [CMS-concurrent-preclean-start]
>>> 1066.757: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1066.757: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1071.821:
>>> [CMS-concurrent-abortable-preclean: 0.714/5.064 secs] [Times:
>>> user=0.71 sys=0.00, real=5.06 secs]
>>> 1071.822: [GC[YG occupancy: 65465 K (118016 K)]1071.822: [Rescan
>>> (parallel) , 0.0056280 secs]1071.827: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 78315K(139444K), 0.0057430 secs]
>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>> 1071.828: [CMS-concurrent-sweep-start]
>>> 1071.830: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1071.830: [CMS-concurrent-reset-start]
>>> 1071.839: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1073.839: [GC [1 CMS-initial-mark: 12849K(21428K)] 78443K(139444K),
>>> 0.0087570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1073.848: [CMS-concurrent-mark-start]
>>> 1073.865: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1073.865: [CMS-concurrent-preclean-start]
>>> 1073.865: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1073.865: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1078.868:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1078.868: [GC[YG occupancy: 65914 K (118016 K)]1078.868: [Rescan
>>> (parallel) , 0.0055280 secs]1078.873: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 78763K(139444K), 0.0056320 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 1078.874: [CMS-concurrent-sweep-start]
>>> 1078.875: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1078.875: [CMS-concurrent-reset-start]
>>> 1078.884: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1080.884: [GC [1 CMS-initial-mark: 12849K(21428K)] 78892K(139444K),
>>> 0.0088520 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1080.893: [CMS-concurrent-mark-start]
>>> 1080.908: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1080.909: [CMS-concurrent-preclean-start]
>>> 1080.909: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1080.909: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1086.021:
>>> [CMS-concurrent-abortable-preclean: 0.714/5.112 secs] [Times:
>>> user=0.72 sys=0.00, real=5.11 secs]
>>> 1086.021: [GC[YG occupancy: 66531 K (118016 K)]1086.022: [Rescan
>>> (parallel) , 0.0075330 secs]1086.029: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 79381K(139444K), 0.0076440 secs]
>>> [Times: user=0.09 sys=0.01, real=0.01 secs]
>>> 1086.029: [CMS-concurrent-sweep-start]
>>> 1086.031: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1086.031: [CMS-concurrent-reset-start]
>>> 1086.041: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1088.041: [GC [1 CMS-initial-mark: 12849K(21428K)] 79509K(139444K),
>>> 0.0091350 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1088.050: [CMS-concurrent-mark-start]
>>> 1088.066: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1088.067: [CMS-concurrent-preclean-start]
>>> 1088.067: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1088.067: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1093.070:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.004 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1093.071: [GC[YG occupancy: 66980 K (118016 K)]1093.071: [Rescan
>>> (parallel) , 0.0051870 secs]1093.076: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 79830K(139444K), 0.0052930 secs]
>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>> 1093.076: [CMS-concurrent-sweep-start]
>>> 1093.078: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1093.078: [CMS-concurrent-reset-start]
>>> 1093.087: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1095.088: [GC [1 CMS-initial-mark: 12849K(21428K)] 79958K(139444K),
>>> 0.0091350 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1095.097: [CMS-concurrent-mark-start]
>>> 1095.114: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1095.114: [CMS-concurrent-preclean-start]
>>> 1095.115: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1095.115: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1100.121:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>>> user=0.69 sys=0.01, real=5.00 secs]
>>> 1100.121: [GC[YG occupancy: 67428 K (118016 K)]1100.122: [Rescan
>>> (parallel) , 0.0068510 secs]1100.128: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 80278K(139444K), 0.0069510 secs]
>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>> 1100.129: [CMS-concurrent-sweep-start]
>>> 1100.130: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1100.130: [CMS-concurrent-reset-start]
>>> 1100.138: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1102.139: [GC [1 CMS-initial-mark: 12849K(21428K)] 80406K(139444K),
>>> 0.0090760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1102.148: [CMS-concurrent-mark-start]
>>> 1102.165: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1102.165: [CMS-concurrent-preclean-start]
>>> 1102.165: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1102.165: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1107.168:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1107.168: [GC[YG occupancy: 67876 K (118016 K)]1107.168: [Rescan
>>> (parallel) , 0.0076420 secs]1107.176: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 80726K(139444K), 0.0077500 secs]
>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>> 1107.176: [CMS-concurrent-sweep-start]
>>> 1107.178: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1107.178: [CMS-concurrent-reset-start]
>>> 1107.187: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1109.188: [GC [1 CMS-initial-mark: 12849K(21428K)] 80854K(139444K),
>>> 0.0091510 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1109.197: [CMS-concurrent-mark-start]
>>> 1109.214: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1109.214: [CMS-concurrent-preclean-start]
>>> 1109.214: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1109.214: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1114.290:
>>> [CMS-concurrent-abortable-preclean: 0.711/5.076 secs] [Times:
>>> user=0.72 sys=0.00, real=5.07 secs]
>>> 1114.290: [GC[YG occupancy: 68473 K (118016 K)]1114.290: [Rescan
>>> (parallel) , 0.0084730 secs]1114.299: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 81322K(139444K), 0.0085810 secs]
>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>> 1114.299: [CMS-concurrent-sweep-start]
>>> 1114.301: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1114.301: [CMS-concurrent-reset-start]
>>> 1114.310: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1115.803: [GC [1 CMS-initial-mark: 12849K(21428K)] 81451K(139444K),
>>> 0.0106050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1115.814: [CMS-concurrent-mark-start]
>>> 1115.830: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1115.830: [CMS-concurrent-preclean-start]
>>> 1115.831: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1115.831: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1120.839:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1120.839: [GC[YG occupancy: 68921 K (118016 K)]1120.839: [Rescan
>>> (parallel) , 0.0088800 secs]1120.848: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 81771K(139444K), 0.0089910 secs]
>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>> 1120.848: [CMS-concurrent-sweep-start]
>>> 1120.850: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1120.850: [CMS-concurrent-reset-start]
>>> 1120.858: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1122.859: [GC [1 CMS-initial-mark: 12849K(21428K)] 81899K(139444K),
>>> 0.0092280 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1122.868: [CMS-concurrent-mark-start]
>>> 1122.885: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1122.885: [CMS-concurrent-preclean-start]
>>> 1122.885: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1122.885: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1127.888:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.69 sys=0.01, real=5.00 secs]
>>> 1127.888: [GC[YG occupancy: 69369 K (118016 K)]1127.888: [Rescan
>>> (parallel) , 0.0087740 secs]1127.897: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 82219K(139444K), 0.0088850 secs]
>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>> 1127.897: [CMS-concurrent-sweep-start]
>>> 1127.898: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1127.898: [CMS-concurrent-reset-start]
>>> 1127.906: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1129.907: [GC [1 CMS-initial-mark: 12849K(21428K)] 82347K(139444K),
>>> 0.0092280 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1129.916: [CMS-concurrent-mark-start]
>>> 1129.933: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1129.933: [CMS-concurrent-preclean-start]
>>> 1129.934: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1129.934: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1134.938:
>>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1134.938: [GC[YG occupancy: 69818 K (118016 K)]1134.939: [Rescan
>>> (parallel) , 0.0078530 secs]1134.946: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 82667K(139444K), 0.0079630 secs]
>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>> 1134.947: [CMS-concurrent-sweep-start]
>>> 1134.948: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1134.948: [CMS-concurrent-reset-start]
>>> 1134.956: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1136.957: [GC [1 CMS-initial-mark: 12849K(21428K)] 82795K(139444K),
>>> 0.0092760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1136.966: [CMS-concurrent-mark-start]
>>> 1136.983: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.01 secs]
>>> 1136.983: [CMS-concurrent-preclean-start]
>>> 1136.984: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.01 secs]
>>> 1136.984: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1141.991:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1141.991: [GC[YG occupancy: 70266 K (118016 K)]1141.991: [Rescan
>>> (parallel) , 0.0090620 secs]1142.000: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 83116K(139444K), 0.0091700 secs]
>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>> 1142.000: [CMS-concurrent-sweep-start]
>>> 1142.002: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1142.002: [CMS-concurrent-reset-start]
>>> 1142.011: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1142.657: [GC [1 CMS-initial-mark: 12849K(21428K)] 83390K(139444K),
>>> 0.0094330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1142.667: [CMS-concurrent-mark-start]
>>> 1142.685: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1142.685: [CMS-concurrent-preclean-start]
>>> 1142.686: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1142.686: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1147.688:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1147.688: [GC[YG occupancy: 70901 K (118016 K)]1147.688: [Rescan
>>> (parallel) , 0.0081170 secs]1147.696: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 83751K(139444K), 0.0082390 secs]
>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>> 1147.697: [CMS-concurrent-sweep-start]
>>> 1147.698: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1147.698: [CMS-concurrent-reset-start]
>>> 1147.706: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1149.706: [GC [1 CMS-initial-mark: 12849K(21428K)] 83879K(139444K),
>>> 0.0095560 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1149.716: [CMS-concurrent-mark-start]
>>> 1149.734: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1149.734: [CMS-concurrent-preclean-start]
>>> 1149.734: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1149.734: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1154.741:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1154.741: [GC[YG occupancy: 71349 K (118016 K)]1154.741: [Rescan
>>> (parallel) , 0.0090720 secs]1154.750: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 84199K(139444K), 0.0091780 secs]
>>> [Times: user=0.10 sys=0.01, real=0.01 secs]
>>> 1154.750: [CMS-concurrent-sweep-start]
>>> 1154.752: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1154.752: [CMS-concurrent-reset-start]
>>> 1154.762: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1155.021: [GC [1 CMS-initial-mark: 12849K(21428K)] 84199K(139444K),
>>> 0.0094030 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1155.031: [CMS-concurrent-mark-start]
>>> 1155.047: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1155.047: [CMS-concurrent-preclean-start]
>>> 1155.047: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1155.047: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1160.056:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1160.056: [GC[YG occupancy: 71669 K (118016 K)]1160.056: [Rescan
>>> (parallel) , 0.0056520 secs]1160.062: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 84519K(139444K), 0.0057790 secs]
>>> [Times: user=0.07 sys=0.00, real=0.00 secs]
>>> 1160.062: [CMS-concurrent-sweep-start]
>>> 1160.064: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1160.064: [CMS-concurrent-reset-start]
>>> 1160.073: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1162.074: [GC [1 CMS-initial-mark: 12849K(21428K)] 84647K(139444K),
>>> 0.0095040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1162.083: [CMS-concurrent-mark-start]
>>> 1162.098: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1162.098: [CMS-concurrent-preclean-start]
>>> 1162.099: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1162.099: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1167.102:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1167.102: [GC[YG occupancy: 72118 K (118016 K)]1167.102: [Rescan
>>> (parallel) , 0.0072180 secs]1167.110: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 84968K(139444K), 0.0073300 secs]
>>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>>> 1167.110: [CMS-concurrent-sweep-start]
>>> 1167.112: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1167.112: [CMS-concurrent-reset-start]
>>> 1167.121: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1169.121: [GC [1 CMS-initial-mark: 12849K(21428K)] 85096K(139444K),
>>> 0.0096940 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1169.131: [CMS-concurrent-mark-start]
>>> 1169.147: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1169.147: [CMS-concurrent-preclean-start]
>>> 1169.147: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1169.147: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1174.197:
>>> [CMS-concurrent-abortable-preclean: 0.720/5.050 secs] [Times:
>>> user=0.72 sys=0.01, real=5.05 secs]
>>> 1174.198: [GC[YG occupancy: 72607 K (118016 K)]1174.198: [Rescan
>>> (parallel) , 0.0064910 secs]1174.204: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 85456K(139444K), 0.0065940 secs]
>>> [Times: user=0.06 sys=0.01, real=0.01 secs]
>>> 1174.204: [CMS-concurrent-sweep-start]
>>> 1174.206: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1174.206: [CMS-concurrent-reset-start]
>>> 1174.215: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1176.215: [GC [1 CMS-initial-mark: 12849K(21428K)] 85585K(139444K),
>>> 0.0095940 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1176.225: [CMS-concurrent-mark-start]
>>> 1176.240: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1176.240: [CMS-concurrent-preclean-start]
>>> 1176.241: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1176.241: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1181.244:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1181.244: [GC[YG occupancy: 73055 K (118016 K)]1181.244: [Rescan
>>> (parallel) , 0.0093030 secs]1181.254: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 85905K(139444K), 0.0094040 secs]
>>> [Times: user=0.09 sys=0.01, real=0.01 secs]
>>> 1181.254: [CMS-concurrent-sweep-start]
>>> 1181.256: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1181.256: [CMS-concurrent-reset-start]
>>> 1181.265: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1183.266: [GC [1 CMS-initial-mark: 12849K(21428K)] 86033K(139444K),
>>> 0.0096490 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1183.275: [CMS-concurrent-mark-start]
>>> 1183.293: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
>>> sys=0.00, real=0.02 secs]
>>> 1183.293: [CMS-concurrent-preclean-start]
>>> 1183.294: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1183.294: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1188.301:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1188.301: [GC[YG occupancy: 73503 K (118016 K)]1188.301: [Rescan
>>> (parallel) , 0.0092610 secs]1188.310: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 86353K(139444K), 0.0093750 secs]
>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>> 1188.310: [CMS-concurrent-sweep-start]
>>> 1188.312: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1188.312: [CMS-concurrent-reset-start]
>>> 1188.320: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1190.321: [GC [1 CMS-initial-mark: 12849K(21428K)] 86481K(139444K),
>>> 0.0097510 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1190.331: [CMS-concurrent-mark-start]
>>> 1190.347: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1190.347: [CMS-concurrent-preclean-start]
>>> 1190.347: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1190.347: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1195.359:
>>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 1195.359: [GC[YG occupancy: 73952 K (118016 K)]1195.359: [Rescan
>>> (parallel) , 0.0093210 secs]1195.368: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 86801K(139444K), 0.0094330 secs]
>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>> 1195.369: [CMS-concurrent-sweep-start]
>>> 1195.370: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1195.370: [CMS-concurrent-reset-start]
>>> 1195.378: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1196.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 88001K(139444K),
>>> 0.0099870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1196.553: [CMS-concurrent-mark-start]
>>> 1196.570: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1196.570: [CMS-concurrent-preclean-start]
>>> 1196.570: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1196.570: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1201.574:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1201.574: [GC[YG occupancy: 75472 K (118016 K)]1201.574: [Rescan
>>> (parallel) , 0.0096480 secs]1201.584: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 88322K(139444K), 0.0097500 secs]
>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>> 1201.584: [CMS-concurrent-sweep-start]
>>> 1201.586: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1201.586: [CMS-concurrent-reset-start]
>>> 1201.595: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1202.679: [GC [1 CMS-initial-mark: 12849K(21428K)] 88491K(139444K),
>>> 0.0099400 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1202.690: [CMS-concurrent-mark-start]
>>> 1202.708: [CMS-concurrent-mark: 0.016/0.019 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1202.708: [CMS-concurrent-preclean-start]
>>> 1202.709: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1202.709: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1207.718:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1207.718: [GC[YG occupancy: 76109 K (118016 K)]1207.718: [Rescan
>>> (parallel) , 0.0096360 secs]1207.727: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 88959K(139444K), 0.0097380 secs]
>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>> 1207.728: [CMS-concurrent-sweep-start]
>>> 1207.729: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1207.729: [CMS-concurrent-reset-start]
>>> 1207.737: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1209.738: [GC [1 CMS-initial-mark: 12849K(21428K)] 89087K(139444K),
>>> 0.0099440 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1209.748: [CMS-concurrent-mark-start]
>>> 1209.765: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1209.765: [CMS-concurrent-preclean-start]
>>> 1209.765: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1209.765: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1214.797:
>>> [CMS-concurrent-abortable-preclean: 0.716/5.031 secs] [Times:
>>> user=0.72 sys=0.00, real=5.03 secs]
>>> 1214.797: [GC[YG occupancy: 76557 K (118016 K)]1214.797: [Rescan
>>> (parallel) , 0.0096280 secs]1214.807: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 89407K(139444K), 0.0097320 secs]
>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>> 1214.807: [CMS-concurrent-sweep-start]
>>> 1214.808: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1214.808: [CMS-concurrent-reset-start]
>>> 1214.816: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1216.817: [GC [1 CMS-initial-mark: 12849K(21428K)] 89535K(139444K),
>>> 0.0099640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1216.827: [CMS-concurrent-mark-start]
>>> 1216.844: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1216.844: [CMS-concurrent-preclean-start]
>>> 1216.844: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1216.844: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1221.847:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1221.847: [GC[YG occupancy: 77005 K (118016 K)]1221.847: [Rescan
>>> (parallel) , 0.0061810 secs]1221.854: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 89855K(139444K), 0.0062950 secs]
>>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>>> 1221.854: [CMS-concurrent-sweep-start]
>>> 1221.855: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1221.855: [CMS-concurrent-reset-start]
>>> 1221.864: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1223.865: [GC [1 CMS-initial-mark: 12849K(21428K)] 89983K(139444K),
>>> 0.0100430 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1223.875: [CMS-concurrent-mark-start]
>>> 1223.890: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1223.890: [CMS-concurrent-preclean-start]
>>> 1223.891: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1223.891: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1228.899:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1228.899: [GC[YG occupancy: 77454 K (118016 K)]1228.899: [Rescan
>>> (parallel) , 0.0095850 secs]1228.909: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 90304K(139444K), 0.0096960 secs]
>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>> 1228.909: [CMS-concurrent-sweep-start]
>>> 1228.911: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1228.911: [CMS-concurrent-reset-start]
>>> 1228.919: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1230.919: [GC [1 CMS-initial-mark: 12849K(21428K)] 90432K(139444K),
>>> 0.0101360 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1230.930: [CMS-concurrent-mark-start]
>>> 1230.946: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1230.946: [CMS-concurrent-preclean-start]
>>> 1230.947: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1230.947: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1235.952:
>>> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1235.952: [GC[YG occupancy: 77943 K (118016 K)]1235.952: [Rescan
>>> (parallel) , 0.0084420 secs]1235.961: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 90793K(139444K), 0.0085450 secs]
>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>> 1235.961: [CMS-concurrent-sweep-start]
>>> 1235.963: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1235.963: [CMS-concurrent-reset-start]
>>> 1235.972: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1237.973: [GC [1 CMS-initial-mark: 12849K(21428K)] 90921K(139444K),
>>> 0.0101280 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1237.983: [CMS-concurrent-mark-start]
>>> 1237.998: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1237.998: [CMS-concurrent-preclean-start]
>>> 1237.999: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1237.999: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1243.008:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 1243.008: [GC[YG occupancy: 78391 K (118016 K)]1243.008: [Rescan
>>> (parallel) , 0.0090510 secs]1243.017: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 91241K(139444K), 0.0091560 secs]
>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>> 1243.017: [CMS-concurrent-sweep-start]
>>> 1243.019: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1243.019: [CMS-concurrent-reset-start]
>>> 1243.027: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1245.027: [GC [1 CMS-initial-mark: 12849K(21428K)] 91369K(139444K),
>>> 0.0101080 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1245.038: [CMS-concurrent-mark-start]
>>> 1245.055: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1245.055: [CMS-concurrent-preclean-start]
>>> 1245.055: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1245.055: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1250.058:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1250.058: [GC[YG occupancy: 78839 K (118016 K)]1250.058: [Rescan
>>> (parallel) , 0.0096920 secs]1250.068: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 91689K(139444K), 0.0098040 secs]
>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>> 1250.068: [CMS-concurrent-sweep-start]
>>> 1250.070: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1250.070: [CMS-concurrent-reset-start]
>>> 1250.078: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1252.078: [GC [1 CMS-initial-mark: 12849K(21428K)] 91817K(139444K),
>>> 0.0102560 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1252.089: [CMS-concurrent-mark-start]
>>> 1252.105: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1252.105: [CMS-concurrent-preclean-start]
>>> 1252.106: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1252.106: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1257.113:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1257.113: [GC[YG occupancy: 79288 K (118016 K)]1257.113: [Rescan
>>> (parallel) , 0.0089920 secs]1257.122: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 92137K(139444K), 0.0090960 secs]
>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>> 1257.122: [CMS-concurrent-sweep-start]
>>> 1257.124: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1257.124: [CMS-concurrent-reset-start]
>>> 1257.133: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1259.134: [GC [1 CMS-initial-mark: 12849K(21428K)] 92266K(139444K),
>>> 0.0101720 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1259.144: [CMS-concurrent-mark-start]
>>> 1259.159: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
>>> sys=0.00, real=0.01 secs]
>>> 1259.159: [CMS-concurrent-preclean-start]
>>> 1259.159: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1259.159: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1264.229:
>>> [CMS-concurrent-abortable-preclean: 0.716/5.070 secs] [Times:
>>> user=0.72 sys=0.01, real=5.07 secs]
>>> 1264.229: [GC[YG occupancy: 79881 K (118016 K)]1264.229: [Rescan
>>> (parallel) , 0.0101320 secs]1264.240: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 92731K(139444K), 0.0102440 secs]
>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>> 1264.240: [CMS-concurrent-sweep-start]
>>> 1264.241: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1264.241: [CMS-concurrent-reset-start]
>>> 1264.250: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1266.250: [GC [1 CMS-initial-mark: 12849K(21428K)] 92859K(139444K),
>>> 0.0105180 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1266.261: [CMS-concurrent-mark-start]
>>> 1266.277: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1266.277: [CMS-concurrent-preclean-start]
>>> 1266.277: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1266.277: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1271.285:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1271.285: [GC[YG occupancy: 80330 K (118016 K)]1271.285: [Rescan
>>> (parallel) , 0.0094600 secs]1271.295: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 93180K(139444K), 0.0095600 secs]
>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>> 1271.295: [CMS-concurrent-sweep-start]
>>> 1271.297: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1271.297: [CMS-concurrent-reset-start]
>>> 1271.306: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1273.306: [GC [1 CMS-initial-mark: 12849K(21428K)] 93308K(139444K),
>>> 0.0104100 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1273.317: [CMS-concurrent-mark-start]
>>> 1273.334: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1273.334: [CMS-concurrent-preclean-start]
>>> 1273.335: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1273.335: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1278.341:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1278.341: [GC[YG occupancy: 80778 K (118016 K)]1278.341: [Rescan
>>> (parallel) , 0.0101320 secs]1278.351: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 93628K(139444K), 0.0102460 secs]
>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>> 1278.351: [CMS-concurrent-sweep-start]
>>> 1278.353: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1278.353: [CMS-concurrent-reset-start]
>>> 1278.362: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1280.362: [GC [1 CMS-initial-mark: 12849K(21428K)] 93756K(139444K),
>>> 0.0105680 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1280.373: [CMS-concurrent-mark-start]
>>> 1280.388: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1280.388: [CMS-concurrent-preclean-start]
>>> 1280.388: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1280.388: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1285.400:
>>> [CMS-concurrent-abortable-preclean: 0.706/5.012 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 1285.400: [GC[YG occupancy: 81262 K (118016 K)]1285.400: [Rescan
>>> (parallel) , 0.0093660 secs]1285.410: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 94111K(139444K), 0.0094820 secs]
>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>> 1285.410: [CMS-concurrent-sweep-start]
>>> 1285.411: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1285.411: [CMS-concurrent-reset-start]
>>> 1285.420: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1287.420: [GC [1 CMS-initial-mark: 12849K(21428K)] 94240K(139444K),
>>> 0.0105800 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1287.431: [CMS-concurrent-mark-start]
>>> 1287.447: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1287.447: [CMS-concurrent-preclean-start]
>>> 1287.447: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1287.447: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1292.460:
>>> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 1292.460: [GC[YG occupancy: 81710 K (118016 K)]1292.460: [Rescan
>>> (parallel) , 0.0081130 secs]1292.468: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 94560K(139444K), 0.0082210 secs]
>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>> 1292.468: [CMS-concurrent-sweep-start]
>>> 1292.470: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1292.470: [CMS-concurrent-reset-start]
>>> 1292.480: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1292.712: [GC [1 CMS-initial-mark: 12849K(21428K)] 94624K(139444K),
>>> 0.0104870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1292.723: [CMS-concurrent-mark-start]
>>> 1292.739: [CMS-concurrent-mark: 0.015/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1292.739: [CMS-concurrent-preclean-start]
>>> 1292.740: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1292.740: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1297.748:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 1297.748: [GC[YG occupancy: 82135 K (118016 K)]1297.748: [Rescan
>>> (parallel) , 0.0106180 secs]1297.759: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 94985K(139444K), 0.0107410 secs]
>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>> 1297.759: [CMS-concurrent-sweep-start]
>>> 1297.760: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1297.761: [CMS-concurrent-reset-start]
>>> 1297.769: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1299.769: [GC [1 CMS-initial-mark: 12849K(21428K)] 95113K(139444K),
>>> 0.0105340 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1299.780: [CMS-concurrent-mark-start]
>>> 1299.796: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1299.796: [CMS-concurrent-preclean-start]
>>> 1299.797: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1299.797: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1304.805:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>> user=0.69 sys=0.00, real=5.01 secs]
>>> 1304.805: [GC[YG occupancy: 82583 K (118016 K)]1304.806: [Rescan
>>> (parallel) , 0.0094010 secs]1304.815: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 95433K(139444K), 0.0095140 secs]
>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>> 1304.815: [CMS-concurrent-sweep-start]
>>> 1304.817: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1304.817: [CMS-concurrent-reset-start]
>>> 1304.827: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1306.827: [GC [1 CMS-initial-mark: 12849K(21428K)] 95561K(139444K),
>>> 0.0107300 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1306.838: [CMS-concurrent-mark-start]
>>> 1306.855: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1306.855: [CMS-concurrent-preclean-start]
>>> 1306.855: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1306.855: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1311.858:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1311.858: [GC[YG occupancy: 83032 K (118016 K)]1311.858: [Rescan
>>> (parallel) , 0.0094210 secs]1311.867: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 95882K(139444K), 0.0095360 secs]
>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>> 1311.868: [CMS-concurrent-sweep-start]
>>> 1311.869: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1311.869: [CMS-concurrent-reset-start]
>>> 1311.877: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1313.878: [GC [1 CMS-initial-mark: 12849K(21428K)] 96010K(139444K),
>>> 0.0107870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1313.889: [CMS-concurrent-mark-start]
>>> 1313.905: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1313.905: [CMS-concurrent-preclean-start]
>>> 1313.906: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1313.906: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1318.914:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1318.915: [GC[YG occupancy: 83481 K (118016 K)]1318.915: [Rescan
>>> (parallel) , 0.0096280 secs]1318.924: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 96331K(139444K), 0.0097340 secs]
>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>> 1318.925: [CMS-concurrent-sweep-start]
>>> 1318.927: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1318.927: [CMS-concurrent-reset-start]
>>> 1318.936: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1320.936: [GC [1 CMS-initial-mark: 12849K(21428K)] 96459K(139444K),
>>> 0.0106300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1320.947: [CMS-concurrent-mark-start]
>>> 1320.964: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1320.964: [CMS-concurrent-preclean-start]
>>> 1320.965: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1320.965: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1325.991:
>>> [CMS-concurrent-abortable-preclean: 0.717/5.026 secs] [Times:
>>> user=0.73 sys=0.00, real=5.02 secs]
>>> 1325.991: [GC[YG occupancy: 84205 K (118016 K)]1325.991: [Rescan
>>> (parallel) , 0.0097880 secs]1326.001: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 97055K(139444K), 0.0099010 secs]
>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>> 1326.001: [CMS-concurrent-sweep-start]
>>> 1326.003: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1326.003: [CMS-concurrent-reset-start]
>>> 1326.012: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1328.013: [GC [1 CMS-initial-mark: 12849K(21428K)] 97183K(139444K),
>>> 0.0109730 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1328.024: [CMS-concurrent-mark-start]
>>> 1328.039: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1328.039: [CMS-concurrent-preclean-start]
>>> 1328.039: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1328.039: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1333.043:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 1333.043: [GC[YG occupancy: 84654 K (118016 K)]1333.043: [Rescan
>>> (parallel) , 0.0110740 secs]1333.054: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 97504K(139444K), 0.0111760 secs]
>>> [Times: user=0.12 sys=0.01, real=0.02 secs]
>>> 1333.054: [CMS-concurrent-sweep-start]
>>> 1333.056: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1333.056: [CMS-concurrent-reset-start]
>>> 1333.065: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1335.066: [GC [1 CMS-initial-mark: 12849K(21428K)] 97632K(139444K),
>>> 0.0109300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1335.077: [CMS-concurrent-mark-start]
>>> 1335.094: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1335.094: [CMS-concurrent-preclean-start]
>>> 1335.094: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1335.094: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1340.103:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1340.103: [GC[YG occupancy: 85203 K (118016 K)]1340.103: [Rescan
>>> (parallel) , 0.0109470 secs]1340.114: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 98052K(139444K), 0.0110500 secs]
>>> [Times: user=0.11 sys=0.01, real=0.02 secs]
>>> 1340.114: [CMS-concurrent-sweep-start]
>>> 1340.116: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1340.116: [CMS-concurrent-reset-start]
>>> 1340.125: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1342.126: [GC [1 CMS-initial-mark: 12849K(21428K)] 98181K(139444K),
>>> 0.0109170 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1342.137: [CMS-concurrent-mark-start]
>>> 1342.154: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1342.154: [CMS-concurrent-preclean-start]
>>> 1342.154: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1342.154: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1347.161:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1347.162: [GC[YG occupancy: 85652 K (118016 K)]1347.162: [Rescan
>>> (parallel) , 0.0075610 secs]1347.169: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 98502K(139444K), 0.0076680 secs]
>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>> 1347.169: [CMS-concurrent-sweep-start]
>>> 1347.171: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1347.172: [CMS-concurrent-reset-start]
>>> 1347.181: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1349.181: [GC [1 CMS-initial-mark: 12849K(21428K)] 98630K(139444K),
>>> 0.0109540 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1349.192: [CMS-concurrent-mark-start]
>>> 1349.208: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1349.208: [CMS-concurrent-preclean-start]
>>> 1349.208: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1349.208: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1354.268:
>>> [CMS-concurrent-abortable-preclean: 0.723/5.060 secs] [Times:
>>> user=0.73 sys=0.00, real=5.06 secs]
>>> 1354.268: [GC[YG occupancy: 86241 K (118016 K)]1354.268: [Rescan
>>> (parallel) , 0.0099530 secs]1354.278: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 99091K(139444K), 0.0100670 secs]
>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>> 1354.278: [CMS-concurrent-sweep-start]
>>> 1354.280: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1354.280: [CMS-concurrent-reset-start]
>>> 1354.288: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1356.289: [GC [1 CMS-initial-mark: 12849K(21428K)] 99219K(139444K),
>>> 0.0111450 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1356.300: [CMS-concurrent-mark-start]
>>> 1356.316: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1356.316: [CMS-concurrent-preclean-start]
>>> 1356.317: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1356.317: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1361.322:
>>> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1361.322: [GC[YG occupancy: 86690 K (118016 K)]1361.322: [Rescan
>>> (parallel) , 0.0097180 secs]1361.332: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 99540K(139444K), 0.0098210 secs]
>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>> 1361.332: [CMS-concurrent-sweep-start]
>>> 1361.333: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1361.333: [CMS-concurrent-reset-start]
>>> 1361.342: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1363.342: [GC [1 CMS-initial-mark: 12849K(21428K)] 99668K(139444K),
>>> 0.0110230 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1363.354: [CMS-concurrent-mark-start]
>>> 1363.368: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1363.368: [CMS-concurrent-preclean-start]
>>> 1363.369: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1363.369: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1368.378:
>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1368.378: [GC[YG occupancy: 87139 K (118016 K)]1368.378: [Rescan
>>> (parallel) , 0.0100770 secs]1368.388: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 99989K(139444K), 0.0101900 secs]
>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>> 1368.388: [CMS-concurrent-sweep-start]
>>> 1368.390: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1368.390: [CMS-concurrent-reset-start]
>>> 1368.398: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1370.399: [GC [1 CMS-initial-mark: 12849K(21428K)] 100117K(139444K),
>>> 0.0111810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1370.410: [CMS-concurrent-mark-start]
>>> 1370.426: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1370.426: [CMS-concurrent-preclean-start]
>>> 1370.427: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1370.427: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1375.447:
>>> [CMS-concurrent-abortable-preclean: 0.715/5.020 secs] [Times:
>>> user=0.72 sys=0.00, real=5.02 secs]
>>> 1375.447: [GC[YG occupancy: 87588 K (118016 K)]1375.447: [Rescan
>>> (parallel) , 0.0101690 secs]1375.457: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 100438K(139444K), 0.0102730 secs]
>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>> 1375.457: [CMS-concurrent-sweep-start]
>>> 1375.459: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1375.459: [CMS-concurrent-reset-start]
>>> 1375.467: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1377.467: [GC [1 CMS-initial-mark: 12849K(21428K)] 100566K(139444K),
>>> 0.0110760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1377.478: [CMS-concurrent-mark-start]
>>> 1377.495: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1377.495: [CMS-concurrent-preclean-start]
>>> 1377.496: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1377.496: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1382.502:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>> user=0.69 sys=0.01, real=5.00 secs]
>>> 1382.502: [GC[YG occupancy: 89213 K (118016 K)]1382.502: [Rescan
>>> (parallel) , 0.0108630 secs]1382.513: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 102063K(139444K), 0.0109700 secs]
>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>> 1382.513: [CMS-concurrent-sweep-start]
>>> 1382.514: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1382.514: [CMS-concurrent-reset-start]
>>> 1382.523: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1382.743: [GC [1 CMS-initial-mark: 12849K(21428K)] 102127K(139444K),
>>> 0.0113140 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1382.755: [CMS-concurrent-mark-start]
>>> 1382.773: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1382.773: [CMS-concurrent-preclean-start]
>>> 1382.774: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1382.774: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1387.777:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1387.777: [GC[YG occupancy: 89638 K (118016 K)]1387.777: [Rescan
>>> (parallel) , 0.0113310 secs]1387.789: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 102488K(139444K), 0.0114430 secs]
>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>> 1387.789: [CMS-concurrent-sweep-start]
>>> 1387.790: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1387.790: [CMS-concurrent-reset-start]
>>> 1387.799: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1389.799: [GC [1 CMS-initial-mark: 12849K(21428K)] 102617K(139444K),
>>> 0.0113540 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1389.810: [CMS-concurrent-mark-start]
>>> 1389.827: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1389.827: [CMS-concurrent-preclean-start]
>>> 1389.827: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1389.827: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1394.831:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1394.831: [GC[YG occupancy: 90088 K (118016 K)]1394.831: [Rescan
>>> (parallel) , 0.0103790 secs]1394.841: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 102938K(139444K), 0.0104960 secs]
>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>> 1394.842: [CMS-concurrent-sweep-start]
>>> 1394.844: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1394.844: [CMS-concurrent-reset-start]
>>> 1394.853: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1396.853: [GC [1 CMS-initial-mark: 12849K(21428K)] 103066K(139444K),
>>> 0.0114740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1396.865: [CMS-concurrent-mark-start]
>>> 1396.880: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1396.880: [CMS-concurrent-preclean-start]
>>> 1396.881: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1396.881: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1401.890:
>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 1401.890: [GC[YG occupancy: 90537 K (118016 K)]1401.891: [Rescan
>>> (parallel) , 0.0116110 secs]1401.902: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 103387K(139444K), 0.0117240 secs]
>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>> 1401.902: [CMS-concurrent-sweep-start]
>>> 1401.904: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1401.904: [CMS-concurrent-reset-start]
>>> 1401.914: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1403.914: [GC [1 CMS-initial-mark: 12849K(21428K)] 103515K(139444K),
>>> 0.0111980 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1403.925: [CMS-concurrent-mark-start]
>>> 1403.943: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.01 secs]
>>> 1403.943: [CMS-concurrent-preclean-start]
>>> 1403.944: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.01 secs]
>>> 1403.944: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1408.982:
>>> [CMS-concurrent-abortable-preclean: 0.718/5.038 secs] [Times:
>>> user=0.72 sys=0.00, real=5.03 secs]
>>> 1408.982: [GC[YG occupancy: 90986 K (118016 K)]1408.982: [Rescan
>>> (parallel) , 0.0115260 secs]1408.994: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 103836K(139444K), 0.0116320 secs]
>>> [Times: user=0.13 sys=0.00, real=0.02 secs]
>>> 1408.994: [CMS-concurrent-sweep-start]
>>> 1408.996: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1408.996: [CMS-concurrent-reset-start]
>>> 1409.005: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1411.005: [GC [1 CMS-initial-mark: 12849K(21428K)] 103964K(139444K),
>>> 0.0114590 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1411.017: [CMS-concurrent-mark-start]
>>> 1411.034: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1411.034: [CMS-concurrent-preclean-start]
>>> 1411.034: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1411.034: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1416.140:
>>> [CMS-concurrent-abortable-preclean: 0.712/5.105 secs] [Times:
>>> user=0.71 sys=0.00, real=5.10 secs]
>>> 1416.140: [GC[YG occupancy: 91476 K (118016 K)]1416.140: [Rescan
>>> (parallel) , 0.0114950 secs]1416.152: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 104326K(139444K), 0.0116020 secs]
>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>> 1416.152: [CMS-concurrent-sweep-start]
>>> 1416.154: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1416.154: [CMS-concurrent-reset-start]
>>> 1416.163: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1418.163: [GC [1 CMS-initial-mark: 12849K(21428K)] 104454K(139444K),
>>> 0.0114040 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1418.175: [CMS-concurrent-mark-start]
>>> 1418.191: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1418.191: [CMS-concurrent-preclean-start]
>>> 1418.191: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1418.191: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1423.198:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1423.199: [GC[YG occupancy: 91925 K (118016 K)]1423.199: [Rescan
>>> (parallel) , 0.0105460 secs]1423.209: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 104775K(139444K), 0.0106640 secs]
>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>> 1423.209: [CMS-concurrent-sweep-start]
>>> 1423.211: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1423.211: [CMS-concurrent-reset-start]
>>> 1423.220: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1425.220: [GC [1 CMS-initial-mark: 12849K(21428K)] 104903K(139444K),
>>> 0.0116300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1425.232: [CMS-concurrent-mark-start]
>>> 1425.248: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1425.248: [CMS-concurrent-preclean-start]
>>> 1425.248: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1425.248: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1430.252:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 1430.252: [GC[YG occupancy: 92374 K (118016 K)]1430.252: [Rescan
>>> (parallel) , 0.0098720 secs]1430.262: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 105224K(139444K), 0.0099750 secs]
>>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>>> 1430.262: [CMS-concurrent-sweep-start]
>>> 1430.264: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1430.264: [CMS-concurrent-reset-start]
>>> 1430.273: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1432.274: [GC [1 CMS-initial-mark: 12849K(21428K)] 105352K(139444K),
>>> 0.0114050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1432.285: [CMS-concurrent-mark-start]
>>> 1432.301: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1432.301: [CMS-concurrent-preclean-start]
>>> 1432.301: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1432.301: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1437.304:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 1437.305: [GC[YG occupancy: 92823 K (118016 K)]1437.305: [Rescan
>>> (parallel) , 0.0115010 secs]1437.316: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 105673K(139444K), 0.0116090 secs]
>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>> 1437.316: [CMS-concurrent-sweep-start]
>>> 1437.319: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1437.319: [CMS-concurrent-reset-start]
>>> 1437.328: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1439.328: [GC [1 CMS-initial-mark: 12849K(21428K)] 105801K(139444K),
>>> 0.0115740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1439.340: [CMS-concurrent-mark-start]
>>> 1439.356: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1439.356: [CMS-concurrent-preclean-start]
>>> 1439.356: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1439.356: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1444.411:
>>> [CMS-concurrent-abortable-preclean: 0.715/5.054 secs] [Times:
>>> user=0.72 sys=0.00, real=5.05 secs]
>>> 1444.411: [GC[YG occupancy: 93547 K (118016 K)]1444.411: [Rescan
>>> (parallel) , 0.0072910 secs]1444.418: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 106397K(139444K), 0.0073970 secs]
>>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>>> 1444.419: [CMS-concurrent-sweep-start]
>>> 1444.420: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1444.420: [CMS-concurrent-reset-start]
>>> 1444.429: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1446.429: [GC [1 CMS-initial-mark: 12849K(21428K)] 106525K(139444K),
>>> 0.0117950 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1446.441: [CMS-concurrent-mark-start]
>>> 1446.457: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1446.457: [CMS-concurrent-preclean-start]
>>> 1446.458: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1446.458: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1451.461:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1451.461: [GC[YG occupancy: 93996 K (118016 K)]1451.461: [Rescan
>>> (parallel) , 0.0120870 secs]1451.473: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 106846K(139444K), 0.0121920 secs]
>>> [Times: user=0.14 sys=0.00, real=0.02 secs]
>>> 1451.473: [CMS-concurrent-sweep-start]
>>> 1451.476: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1451.476: [CMS-concurrent-reset-start]
>>> 1451.485: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1453.485: [GC [1 CMS-initial-mark: 12849K(21428K)] 106974K(139444K),
>>> 0.0117990 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1453.497: [CMS-concurrent-mark-start]
>>> 1453.514: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1453.514: [CMS-concurrent-preclean-start]
>>> 1453.515: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1453.515: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1458.518:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1458.518: [GC[YG occupancy: 94445 K (118016 K)]1458.518: [Rescan
>>> (parallel) , 0.0123720 secs]1458.530: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 107295K(139444K), 0.0124750 secs]
>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>> 1458.530: [CMS-concurrent-sweep-start]
>>> 1458.532: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1458.532: [CMS-concurrent-reset-start]
>>> 1458.540: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1460.541: [GC [1 CMS-initial-mark: 12849K(21428K)] 107423K(139444K),
>>> 0.0118680 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1460.553: [CMS-concurrent-mark-start]
>>> 1460.568: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1460.568: [CMS-concurrent-preclean-start]
>>> 1460.569: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1460.569: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1465.577:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1465.577: [GC[YG occupancy: 94894 K (118016 K)]1465.577: [Rescan
>>> (parallel) , 0.0119100 secs]1465.589: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 107744K(139444K), 0.0120270 secs]
>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>> 1465.590: [CMS-concurrent-sweep-start]
>>> 1465.591: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1465.591: [CMS-concurrent-reset-start]
>>> 1465.600: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1467.600: [GC [1 CMS-initial-mark: 12849K(21428K)] 107937K(139444K),
>>> 0.0120020 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1467.612: [CMS-concurrent-mark-start]
>>> 1467.628: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1467.628: [CMS-concurrent-preclean-start]
>>> 1467.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1467.628: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1472.636:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 1472.637: [GC[YG occupancy: 95408 K (118016 K)]1472.637: [Rescan
>>> (parallel) , 0.0119090 secs]1472.649: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 108257K(139444K), 0.0120260 secs]
>>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>>> 1472.649: [CMS-concurrent-sweep-start]
>>> 1472.650: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1472.650: [CMS-concurrent-reset-start]
>>> 1472.659: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1472.775: [GC [1 CMS-initial-mark: 12849K(21428K)] 108365K(139444K),
>>> 0.0120260 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1472.787: [CMS-concurrent-mark-start]
>>> 1472.805: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1472.805: [CMS-concurrent-preclean-start]
>>> 1472.806: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.01 sys=0.00, real=0.00 secs]
>>> 1472.806: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1477.808:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 1477.808: [GC[YG occupancy: 95876 K (118016 K)]1477.808: [Rescan
>>> (parallel) , 0.0099490 secs]1477.818: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 108726K(139444K), 0.0100580 secs]
>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>> 1477.818: [CMS-concurrent-sweep-start]
>>> 1477.820: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1477.820: [CMS-concurrent-reset-start]
>>> 1477.828: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1479.829: [GC [1 CMS-initial-mark: 12849K(21428K)] 108854K(139444K),
>>> 0.0119550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1479.841: [CMS-concurrent-mark-start]
>>> 1479.857: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1479.857: [CMS-concurrent-preclean-start]
>>> 1479.857: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1479.857: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1484.870:
>>> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 1484.870: [GC[YG occupancy: 96325 K (118016 K)]1484.870: [Rescan
>>> (parallel) , 0.0122870 secs]1484.882: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 109175K(139444K), 0.0123900 secs]
>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>> 1484.882: [CMS-concurrent-sweep-start]
>>> 1484.884: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1484.884: [CMS-concurrent-reset-start]
>>> 1484.893: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1486.893: [GC [1 CMS-initial-mark: 12849K(21428K)] 109304K(139444K),
>>> 0.0118470 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>> 1486.905: [CMS-concurrent-mark-start]
>>> 1486.921: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.01 secs]
>>> 1486.921: [CMS-concurrent-preclean-start]
>>> 1486.921: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1486.921: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1491.968:
>>> [CMS-concurrent-abortable-preclean: 0.720/5.047 secs] [Times:
>>> user=0.72 sys=0.00, real=5.05 secs]
>>> 1491.968: [GC[YG occupancy: 96774 K (118016 K)]1491.968: [Rescan
>>> (parallel) , 0.0122850 secs]1491.981: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 109624K(139444K), 0.0123880 secs]
>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>> 1491.981: [CMS-concurrent-sweep-start]
>>> 1491.982: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1491.982: [CMS-concurrent-reset-start]
>>> 1491.991: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1493.991: [GC [1 CMS-initial-mark: 12849K(21428K)] 109753K(139444K),
>>> 0.0119790 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1494.004: [CMS-concurrent-mark-start]
>>> 1494.019: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1494.019: [CMS-concurrent-preclean-start]
>>> 1494.019: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1494.019: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1499.100:
>>> [CMS-concurrent-abortable-preclean: 0.722/5.080 secs] [Times:
>>> user=0.72 sys=0.00, real=5.08 secs]
>>> 1499.100: [GC[YG occupancy: 98295 K (118016 K)]1499.100: [Rescan
>>> (parallel) , 0.0123180 secs]1499.112: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 111145K(139444K), 0.0124240 secs]
>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>> 1499.113: [CMS-concurrent-sweep-start]
>>> 1499.114: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1499.114: [CMS-concurrent-reset-start]
>>> 1499.123: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1501.123: [GC [1 CMS-initial-mark: 12849K(21428K)] 111274K(139444K),
>>> 0.0117720 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
>>> 1501.135: [CMS-concurrent-mark-start]
>>> 1501.150: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
>>> sys=0.00, real=0.01 secs]
>>> 1501.150: [CMS-concurrent-preclean-start]
>>> 1501.151: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.01 sys=0.00, real=0.00 secs]
>>> 1501.151: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1506.172:
>>> [CMS-concurrent-abortable-preclean: 0.712/5.022 secs] [Times:
>>> user=0.71 sys=0.00, real=5.02 secs]
>>> 1506.172: [GC[YG occupancy: 98890 K (118016 K)]1506.173: [Rescan
>>> (parallel) , 0.0113790 secs]1506.184: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 111740K(139444K), 0.0114830 secs]
>>> [Times: user=0.13 sys=0.00, real=0.02 secs]
>>> 1506.184: [CMS-concurrent-sweep-start]
>>> 1506.186: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1506.186: [CMS-concurrent-reset-start]
>>> 1506.195: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1508.196: [GC [1 CMS-initial-mark: 12849K(21428K)] 111868K(139444K),
>>> 0.0122930 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1508.208: [CMS-concurrent-mark-start]
>>> 1508.225: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1508.225: [CMS-concurrent-preclean-start]
>>> 1508.225: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1508.226: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1513.232:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1513.232: [GC[YG occupancy: 99339 K (118016 K)]1513.232: [Rescan
>>> (parallel) , 0.0123890 secs]1513.244: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 112189K(139444K), 0.0124930 secs]
>>> [Times: user=0.14 sys=0.00, real=0.02 secs]
>>> 1513.245: [CMS-concurrent-sweep-start]
>>> 1513.246: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1513.246: [CMS-concurrent-reset-start]
>>> 1513.255: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1515.256: [GC [1 CMS-initial-mark: 12849K(21428K)] 113182K(139444K),
>>> 0.0123210 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1515.268: [CMS-concurrent-mark-start]
>>> 1515.285: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1515.285: [CMS-concurrent-preclean-start]
>>> 1515.285: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1515.285: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1520.290:
>>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1520.290: [GC[YG occupancy: 100653 K (118016 K)]1520.290: [Rescan
>>> (parallel) , 0.0125490 secs]1520.303: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 113502K(139444K), 0.0126520 secs]
>>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>>> 1520.303: [CMS-concurrent-sweep-start]
>>> 1520.304: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1520.304: [CMS-concurrent-reset-start]
>>> 1520.313: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1522.314: [GC [1 CMS-initial-mark: 12849K(21428K)] 113631K(139444K),
>>> 0.0118790 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1522.326: [CMS-concurrent-mark-start]
>>> 1522.343: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.01 secs]
>>> 1522.343: [CMS-concurrent-preclean-start]
>>> 1522.343: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1522.343: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1527.350:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1527.350: [GC[YG occupancy: 101102 K (118016 K)]1527.350: [Rescan
>>> (parallel) , 0.0127460 secs]1527.363: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 113952K(139444K), 0.0128490 secs]
>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>> 1527.363: [CMS-concurrent-sweep-start]
>>> 1527.365: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1527.365: [CMS-concurrent-reset-start]
>>> 1527.374: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1529.374: [GC [1 CMS-initial-mark: 12849K(21428K)] 114080K(139444K),
>>> 0.0117550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1529.386: [CMS-concurrent-mark-start]
>>> 1529.403: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1529.404: [CMS-concurrent-preclean-start]
>>> 1529.404: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1529.404: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1534.454:
>>> [CMS-concurrent-abortable-preclean: 0.712/5.050 secs] [Times:
>>> user=0.70 sys=0.01, real=5.05 secs]
>>> 1534.454: [GC[YG occupancy: 101591 K (118016 K)]1534.454: [Rescan
>>> (parallel) , 0.0122680 secs]1534.466: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 114441K(139444K), 0.0123750 secs]
>>> [Times: user=0.12 sys=0.02, real=0.01 secs]
>>> 1534.466: [CMS-concurrent-sweep-start]
>>> 1534.468: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1534.468: [CMS-concurrent-reset-start]
>>> 1534.478: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1536.478: [GC [1 CMS-initial-mark: 12849K(21428K)] 114570K(139444K),
>>> 0.0125250 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1536.491: [CMS-concurrent-mark-start]
>>> 1536.507: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1536.507: [CMS-concurrent-preclean-start]
>>> 1536.507: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1536.507: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1541.516:
>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1541.516: [GC[YG occupancy: 102041 K (118016 K)]1541.516: [Rescan
>>> (parallel) , 0.0088270 secs]1541.525: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 114890K(139444K), 0.0089300 secs]
>>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>>> 1541.525: [CMS-concurrent-sweep-start]
>>> 1541.527: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1541.527: [CMS-concurrent-reset-start]
>>> 1541.537: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1543.537: [GC [1 CMS-initial-mark: 12849K(21428K)] 115019K(139444K),
>>> 0.0124500 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1543.550: [CMS-concurrent-mark-start]
>>> 1543.566: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1543.566: [CMS-concurrent-preclean-start]
>>> 1543.567: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1543.567: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1548.578:
>>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>>> user=0.71 sys=0.00, real=5.01 secs]
>>> 1548.578: [GC[YG occupancy: 102490 K (118016 K)]1548.578: [Rescan
>>> (parallel) , 0.0100430 secs]1548.588: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 115340K(139444K), 0.0101440 secs]
>>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>>> 1548.588: [CMS-concurrent-sweep-start]
>>> 1548.589: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1548.589: [CMS-concurrent-reset-start]
>>> 1548.598: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1550.598: [GC [1 CMS-initial-mark: 12849K(21428K)] 115468K(139444K),
>>> 0.0125070 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1550.611: [CMS-concurrent-mark-start]
>>> 1550.627: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1550.627: [CMS-concurrent-preclean-start]
>>> 1550.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1550.628: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1555.631:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 1555.631: [GC[YG occupancy: 103003 K (118016 K)]1555.631: [Rescan
>>> (parallel) , 0.0117610 secs]1555.643: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 115853K(139444K), 0.0118770 secs]
>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>> 1555.643: [CMS-concurrent-sweep-start]
>>> 1555.645: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1555.645: [CMS-concurrent-reset-start]
>>> 1555.655: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1557.655: [GC [1 CMS-initial-mark: 12849K(21428K)] 115981K(139444K),
>>> 0.0126720 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1557.668: [CMS-concurrent-mark-start]
>>> 1557.685: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1557.685: [CMS-concurrent-preclean-start]
>>> 1557.685: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1557.685: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1562.688:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1562.688: [GC[YG occupancy: 103557 K (118016 K)]1562.688: [Rescan
>>> (parallel) , 0.0121530 secs]1562.700: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 116407K(139444K), 0.0122560 secs]
>>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>>> 1562.700: [CMS-concurrent-sweep-start]
>>> 1562.701: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1562.701: [CMS-concurrent-reset-start]
>>> 1562.710: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1562.821: [GC [1 CMS-initial-mark: 12849K(21428K)] 116514K(139444K),
>>> 0.0127240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1562.834: [CMS-concurrent-mark-start]
>>> 1562.852: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.01 secs]
>>> 1562.852: [CMS-concurrent-preclean-start]
>>> 1562.853: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1562.853: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1567.859:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1567.859: [GC[YG occupancy: 104026 K (118016 K)]1567.859: [Rescan
>>> (parallel) , 0.0131290 secs]1567.872: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 116876K(139444K), 0.0132470 secs]
>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>> 1567.873: [CMS-concurrent-sweep-start]
>>> 1567.874: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1567.874: [CMS-concurrent-reset-start]
>>> 1567.883: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1569.883: [GC [1 CMS-initial-mark: 12849K(21428K)] 117103K(139444K),
>>> 0.0123770 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>> 1569.896: [CMS-concurrent-mark-start]
>>> 1569.913: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.01 secs]
>>> 1569.913: [CMS-concurrent-preclean-start]
>>> 1569.913: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.01 secs]
>>> 1569.913: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1574.920:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1574.920: [GC[YG occupancy: 104510 K (118016 K)]1574.920: [Rescan
>>> (parallel) , 0.0122810 secs]1574.932: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 12849K(21428K)] 117360K(139444K), 0.0123870 secs]
>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>> 1574.933: [CMS-concurrent-sweep-start]
>>> 1574.935: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1574.935: [CMS-concurrent-reset-start]
>>> 1574.944: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1575.163: [GC [1 CMS-initial-mark: 12849K(21428K)] 117360K(139444K),
>>> 0.0121590 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
>>> 1575.176: [CMS-concurrent-mark-start]
>>> 1575.193: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1575.193: [CMS-concurrent-preclean-start]
>>> 1575.193: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.01 secs]
>>> 1575.193: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1580.197:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.71 sys=0.00, real=5.00 secs]
>>> 1580.197: [GC[YG occupancy: 104831 K (118016 K)]1580.197: [Rescan
>>> (parallel) , 0.0129860 secs]1580.210: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 117681K(139444K), 0.0130980 secs]
>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>> 1580.210: [CMS-concurrent-sweep-start]
>>> 1580.211: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1580.211: [CMS-concurrent-reset-start]
>>> 1580.220: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1582.220: [GC [1 CMS-initial-mark: 12849K(21428K)] 117809K(139444K),
>>> 0.0129700 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1582.234: [CMS-concurrent-mark-start]
>>> 1582.249: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
>>> sys=0.01, real=0.02 secs]
>>> 1582.249: [CMS-concurrent-preclean-start]
>>> 1582.249: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1582.249: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1587.262:
>>> [CMS-concurrent-abortable-preclean: 0.707/5.013 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1587.262: [GC[YG occupancy: 105280 K (118016 K)]1587.262: [Rescan
>>> (parallel) , 0.0134570 secs]1587.276: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 118130K(139444K), 0.0135720 secs]
>>> [Times: user=0.15 sys=0.00, real=0.02 secs]
>>> 1587.276: [CMS-concurrent-sweep-start]
>>> 1587.278: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1587.278: [CMS-concurrent-reset-start]
>>> 1587.287: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1589.287: [GC [1 CMS-initial-mark: 12849K(21428K)] 118258K(139444K),
>>> 0.0130010 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1589.301: [CMS-concurrent-mark-start]
>>> 1589.316: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1589.316: [CMS-concurrent-preclean-start]
>>> 1589.316: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1589.316: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1594.364:
>>> [CMS-concurrent-abortable-preclean: 0.712/5.048 secs] [Times:
>>> user=0.71 sys=0.00, real=5.05 secs]
>>> 1594.365: [GC[YG occupancy: 105770 K (118016 K)]1594.365: [Rescan
>>> (parallel) , 0.0131190 secs]1594.378: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 118620K(139444K), 0.0132380 secs]
>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>> 1594.378: [CMS-concurrent-sweep-start]
>>> 1594.380: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1594.380: [CMS-concurrent-reset-start]
>>> 1594.389: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1596.390: [GC [1 CMS-initial-mark: 12849K(21428K)] 118748K(139444K),
>>> 0.0130650 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1596.403: [CMS-concurrent-mark-start]
>>> 1596.418: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1596.418: [CMS-concurrent-preclean-start]
>>> 1596.419: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1596.419: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1601.422:
>>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>>> user=0.69 sys=0.01, real=5.00 secs]
>>> 1601.422: [GC[YG occupancy: 106219 K (118016 K)]1601.422: [Rescan
>>> (parallel) , 0.0130310 secs]1601.435: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 119069K(139444K), 0.0131490 secs]
>>> [Times: user=0.16 sys=0.00, real=0.02 secs]
>>> 1601.435: [CMS-concurrent-sweep-start]
>>> 1601.437: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1601.437: [CMS-concurrent-reset-start]
>>> 1601.446: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1603.447: [GC [1 CMS-initial-mark: 12849K(21428K)] 119197K(139444K),
>>> 0.0130220 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1603.460: [CMS-concurrent-mark-start]
>>> 1603.476: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1603.476: [CMS-concurrent-preclean-start]
>>> 1603.476: [CMS-concurrent-preclean: 0.000/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1603.476: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1608.478:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1608.478: [GC[YG occupancy: 106668 K (118016 K)]1608.479: [Rescan
>>> (parallel) , 0.0122680 secs]1608.491: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 119518K(139444K), 0.0123790 secs]
>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>> 1608.491: [CMS-concurrent-sweep-start]
>>> 1608.492: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1608.492: [CMS-concurrent-reset-start]
>>> 1608.501: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1610.502: [GC [1 CMS-initial-mark: 12849K(21428K)] 119646K(139444K),
>>> 0.0130770 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>> 1610.515: [CMS-concurrent-mark-start]
>>> 1610.530: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1610.530: [CMS-concurrent-preclean-start]
>>> 1610.530: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1610.530: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1615.536:
>>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1615.536: [GC[YG occupancy: 107117 K (118016 K)]1615.536: [Rescan
>>> (parallel) , 0.0125470 secs]1615.549: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 119967K(139444K), 0.0126510 secs]
>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>> 1615.549: [CMS-concurrent-sweep-start]
>>> 1615.551: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1615.551: [CMS-concurrent-reset-start]
>>> 1615.561: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1617.561: [GC [1 CMS-initial-mark: 12849K(21428K)] 120095K(139444K),
>>> 0.0129520 secs] [Times: user=0.00 sys=0.00, real=0.02 secs]
>>> 1617.574: [CMS-concurrent-mark-start]
>>> 1617.591: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.01 secs]
>>> 1617.591: [CMS-concurrent-preclean-start]
>>> 1617.591: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1617.591: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1622.598:
>>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>>> user=0.70 sys=0.00, real=5.01 secs]
>>> 1622.598: [GC[YG occupancy: 107777 K (118016 K)]1622.599: [Rescan
>>> (parallel) , 0.0140340 secs]1622.613: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 120627K(139444K), 0.0141520 secs]
>>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>>> 1622.613: [CMS-concurrent-sweep-start]
>>> 1622.614: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1622.614: [CMS-concurrent-reset-start]
>>> 1622.623: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.02 secs]
>>> 1622.848: [GC [1 CMS-initial-mark: 12849K(21428K)] 120691K(139444K),
>>> 0.0133410 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1622.861: [CMS-concurrent-mark-start]
>>> 1622.878: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1622.878: [CMS-concurrent-preclean-start]
>>> 1622.879: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1622.879: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1627.941:
>>> [CMS-concurrent-abortable-preclean: 0.656/5.062 secs] [Times:
>>> user=0.65 sys=0.00, real=5.06 secs]
>>> 1627.941: [GC[YG occupancy: 108202 K (118016 K)]1627.941: [Rescan
>>> (parallel) , 0.0135120 secs]1627.955: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 121052K(139444K), 0.0136620 secs]
>>> [Times: user=0.15 sys=0.00, real=0.02 secs]
>>> 1627.955: [CMS-concurrent-sweep-start]
>>> 1627.956: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1627.956: [CMS-concurrent-reset-start]
>>> 1627.965: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1629.966: [GC [1 CMS-initial-mark: 12849K(21428K)] 121180K(139444K),
>>> 0.0133770 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1629.979: [CMS-concurrent-mark-start]
>>> 1629.995: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1629.995: [CMS-concurrent-preclean-start]
>>> 1629.996: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1629.996: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1634.998:
>>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>>> user=0.69 sys=0.00, real=5.00 secs]
>>> 1634.999: [GC[YG occupancy: 108651 K (118016 K)]1634.999: [Rescan
>>> (parallel) , 0.0134300 secs]1635.012: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 121501K(139444K), 0.0135530 secs]
>>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>>> 1635.012: [CMS-concurrent-sweep-start]
>>> 1635.014: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1635.014: [CMS-concurrent-reset-start]
>>> 1635.023: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1637.023: [GC [1 CMS-initial-mark: 12849K(21428K)] 121629K(139444K),
>>> 0.0127330 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>> 1637.036: [CMS-concurrent-mark-start]
>>> 1637.053: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1637.054: [CMS-concurrent-preclean-start]
>>> 1637.054: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1637.054: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1642.062:
>>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1642.062: [GC[YG occupancy: 109100 K (118016 K)]1642.062: [Rescan
>>> (parallel) , 0.0124310 secs]1642.075: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 12849K(21428K)] 121950K(139444K), 0.0125510 secs]
>>> [Times: user=0.16 sys=0.00, real=0.02 secs]
>>> 1642.075: [CMS-concurrent-sweep-start]
>>> 1642.077: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1642.077: [CMS-concurrent-reset-start]
>>> 1642.086: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1644.087: [GC [1 CMS-initial-mark: 12849K(21428K)] 122079K(139444K),
>>> 0.0134300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1644.100: [CMS-concurrent-mark-start]
>>> 1644.116: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1644.116: [CMS-concurrent-preclean-start]
>>> 1644.116: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1644.116: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1649.125:
>>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>>> user=0.70 sys=0.00, real=5.00 secs]
>>> 1649.126: [GC[YG occupancy: 109549 K (118016 K)]1649.126: [Rescan
>>> (parallel) , 0.0126870 secs]1649.138: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 12849K(21428K)] 122399K(139444K), 0.0128010 secs]
>>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>>> 1649.139: [CMS-concurrent-sweep-start]
>>> 1649.141: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1649.141: [CMS-concurrent-reset-start]
>>> 1649.150: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1651.150: [GC [1 CMS-initial-mark: 12849K(21428K)] 122528K(139444K), 0.0134790 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1651.164: [CMS-concurrent-mark-start]
>>> 1651.179: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
>>> 1651.179: [CMS-concurrent-preclean-start]
>>> 1651.179: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1651.179: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1656.254: [CMS-concurrent-abortable-preclean: 0.722/5.074 secs] [Times: user=0.71 sys=0.01, real=5.07 secs]
>>> 1656.254: [GC[YG occupancy: 110039 K (118016 K)]1656.254: [Rescan (parallel) , 0.0092110 secs]1656.263: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 12849K(21428K)] 122889K(139444K), 0.0093170 secs] [Times: user=0.12 sys=0.00, real=0.01 secs]
>>> 1656.263: [CMS-concurrent-sweep-start]
>>> 1656.266: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1656.266: [CMS-concurrent-reset-start]
>>> 1656.275: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1658.275: [GC [1 CMS-initial-mark: 12849K(21428K)] 123017K(139444K), 0.0134150 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1658.289: [CMS-concurrent-mark-start]
>>> 1658.305: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1658.306: [CMS-concurrent-preclean-start]
>>> 1658.306: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1658.306: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1663.393: [CMS-concurrent-abortable-preclean: 0.711/5.087 secs] [Times: user=0.71 sys=0.00, real=5.08 secs]
>>> 1663.393: [GC[YG occupancy: 110488 K (118016 K)]1663.393: [Rescan (parallel) , 0.0132450 secs]1663.406: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 12849K(21428K)] 123338K(139444K), 0.0133600 secs] [Times: user=0.15 sys=0.00, real=0.02 secs]
>>> 1663.407: [CMS-concurrent-sweep-start]
>>> 1663.409: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1663.409: [CMS-concurrent-reset-start]
>>> 1663.418: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1665.418: [GC [1 CMS-initial-mark: 12849K(21428K)] 123467K(139444K), 0.0135570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1665.432: [CMS-concurrent-mark-start]
>>> 1665.447: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
>>> 1665.447: [CMS-concurrent-preclean-start]
>>> 1665.448: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1665.448: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1670.457: [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times: user=0.71 sys=0.00, real=5.01 secs]
>>> 1670.457: [GC[YG occupancy: 110937 K (118016 K)]1670.457: [Rescan (parallel) , 0.0142820 secs]1670.471: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 12849K(21428K)] 123787K(139444K), 0.0144010 secs] [Times: user=0.16 sys=0.00, real=0.01 secs]
>>> 1670.472: [CMS-concurrent-sweep-start]
>>> 1670.473: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1670.473: [CMS-concurrent-reset-start]
>>> 1670.482: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1672.482: [GC [1 CMS-initial-mark: 12849K(21428K)] 123916K(139444K), 0.0136110 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>> 1672.496: [CMS-concurrent-mark-start]
>>> 1672.513: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 1672.513: [CMS-concurrent-preclean-start]
>>> 1672.513: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1672.513: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1677.530: [CMS-concurrent-abortable-preclean: 0.711/5.017 secs] [Times: user=0.71 sys=0.00, real=5.02 secs]
>>> 1677.530: [GC[YG occupancy: 111387 K (118016 K)]1677.530: [Rescan (parallel) , 0.0129210 secs]1677.543: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 12849K(21428K)] 124236K(139444K), 0.0130360 secs] [Times: user=0.16 sys=0.00, real=0.02 secs]
>>> 1677.543: [CMS-concurrent-sweep-start]
>>> 1677.545: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1677.545: [CMS-concurrent-reset-start]
>>> 1677.554: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1679.554: [GC [1 CMS-initial-mark: 12849K(21428K)] 124365K(139444K), 0.0125140 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1679.567: [CMS-concurrent-mark-start]
>>> 1679.584: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1679.584: [CMS-concurrent-preclean-start]
>>> 1679.584: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1679.584: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1684.631: [CMS-concurrent-abortable-preclean: 0.714/5.047 secs] [Times: user=0.72 sys=0.00, real=5.04 secs]
>>> 1684.631: [GC[YG occupancy: 112005 K (118016 K)]1684.631: [Rescan (parallel) , 0.0146760 secs]1684.646: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 12849K(21428K)] 124855K(139444K), 0.0147930 secs] [Times: user=0.16 sys=0.00, real=0.02 secs]
>>> 1684.646: [CMS-concurrent-sweep-start]
>>> 1684.648: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1684.648: [CMS-concurrent-reset-start]
>>> 1684.656: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1686.656: [GC [1 CMS-initial-mark: 12849K(21428K)] 125048K(139444K), 0.0138340 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1686.670: [CMS-concurrent-mark-start]
>>> 1686.686: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
>>> 1686.686: [CMS-concurrent-preclean-start]
>>> 1686.687: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1686.687: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1691.689: [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times: user=0.70 sys=0.00, real=5.00 secs]
>>> 1691.689: [GC[YG occupancy: 112518 K (118016 K)]1691.689: [Rescan (parallel) , 0.0142600 secs]1691.703: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 12849K(21428K)] 125368K(139444K), 0.0143810 secs] [Times: user=0.16 sys=0.00, real=0.02 secs]
>>> 1691.703: [CMS-concurrent-sweep-start]
>>> 1691.705: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1691.705: [CMS-concurrent-reset-start]
>>> 1691.714: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1693.714: [GC [1 CMS-initial-mark: 12849K(21428K)] 125497K(139444K), 0.0126710 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1693.727: [CMS-concurrent-mark-start]
>>> 1693.744: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1693.744: [CMS-concurrent-preclean-start]
>>> 1693.745: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1693.745: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1698.747: [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times: user=0.70 sys=0.00, real=5.00 secs]
>>> 1698.748: [GC[YG occupancy: 112968 K (118016 K)]1698.748: [Rescan (parallel) , 0.0147370 secs]1698.762: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 12849K(21428K)] 125818K(139444K), 0.0148490 secs] [Times: user=0.17 sys=0.00, real=0.01 secs]
>>> 1698.763: [CMS-concurrent-sweep-start]
>>> 1698.764: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1698.764: [CMS-concurrent-reset-start]
>>> 1698.773: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1700.773: [GC [1 CMS-initial-mark: 12849K(21428K)] 125946K(139444K), 0.0128810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1700.786: [CMS-concurrent-mark-start]
>>> 1700.804: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1700.804: [CMS-concurrent-preclean-start]
>>> 1700.804: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1700.804: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1705.810: [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times: user=0.70 sys=0.00, real=5.00 secs]
>>> 1705.810: [GC[YG occupancy: 113417 K (118016 K)]1705.810: [Rescan (parallel) , 0.0146750 secs]1705.825: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 12849K(21428K)] 126267K(139444K), 0.0147760 secs] [Times: user=0.17 sys=0.00, real=0.02 secs]
>>> 1705.825: [CMS-concurrent-sweep-start]
>>> 1705.827: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1705.827: [CMS-concurrent-reset-start]
>>> 1705.836: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1707.836: [GC [1 CMS-initial-mark: 12849K(21428K)] 126395K(139444K), 0.0137570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1707.850: [CMS-concurrent-mark-start]
>>> 1707.866: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
>>> 1707.866: [CMS-concurrent-preclean-start]
>>> 1707.867: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1707.867: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1712.878: [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times: user=0.71 sys=0.00, real=5.01 secs]
>>> 1712.878: [GC[YG occupancy: 113866 K (118016 K)]1712.878: [Rescan (parallel) , 0.0116340 secs]1712.890: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 12849K(21428K)] 126716K(139444K), 0.0117350 secs] [Times: user=0.12 sys=0.00, real=0.01 secs]
>>> 1712.890: [CMS-concurrent-sweep-start]
>>> 1712.893: [CMS-concurrent-sweep: 0.002/0.003 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1712.893: [CMS-concurrent-reset-start]
>>> 1712.902: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1714.902: [GC [1 CMS-initial-mark: 12849K(21428K)] 126984K(139444K), 0.0134590 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>> 1714.915: [CMS-concurrent-mark-start]
>>> 1714.933: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1714.933: [CMS-concurrent-preclean-start]
>>> 1714.934: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1714.934: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1719.940: [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times: user=0.71 sys=0.00, real=5.00 secs]
>>> 1719.940: [GC[YG occupancy: 114552 K (118016 K)]1719.940: [Rescan (parallel) , 0.0141320 secs]1719.955: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 12849K(21428K)] 127402K(139444K), 0.0142280 secs] [Times: user=0.16 sys=0.01, real=0.02 secs]
>>> 1719.955: [CMS-concurrent-sweep-start]
>>> 1719.956: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1719.956: [CMS-concurrent-reset-start]
>>> 1719.965: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1721.966: [GC [1 CMS-initial-mark: 12849K(21428K)] 127530K(139444K), 0.0139120 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1721.980: [CMS-concurrent-mark-start]
>>> 1721.996: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1721.996: [CMS-concurrent-preclean-start]
>>> 1721.997: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1721.997: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1727.010: [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times: user=0.71 sys=0.00, real=5.01 secs]
>>> 1727.010: [GC[YG occupancy: 115000 K (118016 K)]1727.010: [Rescan (parallel) , 0.0123190 secs]1727.023: [weak refs processing, 0.0000130 secs] [1 CMS-remark: 12849K(21428K)] 127850K(139444K), 0.0124420 secs] [Times: user=0.15 sys=0.00, real=0.01 secs]
>>> 1727.023: [CMS-concurrent-sweep-start]
>>> 1727.024: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1727.024: [CMS-concurrent-reset-start]
>>> 1727.033: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1729.034: [GC [1 CMS-initial-mark: 12849K(21428K)] 127978K(139444K), 0.0129330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1729.047: [CMS-concurrent-mark-start]
>>> 1729.064: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1729.064: [CMS-concurrent-preclean-start]
>>> 1729.064: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1729.064: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1734.075: [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times: user=0.70 sys=0.00, real=5.01 secs]
>>> 1734.075: [GC[YG occupancy: 115449 K (118016 K)]1734.075: [Rescan (parallel) , 0.0131600 secs]1734.088: [weak refs processing, 0.0000130 secs] [1 CMS-remark: 12849K(21428K)] 128298K(139444K), 0.0132810 secs] [Times: user=0.16 sys=0.00, real=0.01 secs]
>>> 1734.089: [CMS-concurrent-sweep-start]
>>> 1734.091: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1734.091: [CMS-concurrent-reset-start]
>>> 1734.100: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1736.100: [GC [1 CMS-initial-mark: 12849K(21428K)] 128427K(139444K), 0.0141000 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>>> 1736.115: [CMS-concurrent-mark-start]
>>> 1736.130: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 1736.131: [CMS-concurrent-preclean-start]
>>> 1736.131: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1736.131: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1741.139: [CMS-concurrent-abortable-preclean: 0.702/5.008 secs] [Times: user=0.70 sys=0.00, real=5.01 secs]
>>> 1741.139: [GC[YG occupancy: 115897 K (118016 K)]1741.139: [Rescan (parallel) , 0.0146880 secs]1741.154: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 12849K(21428K)] 128747K(139444K), 0.0148020 secs] [Times: user=0.17 sys=0.00, real=0.02 secs]
>>> 1741.154: [CMS-concurrent-sweep-start]
>>> 1741.156: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1741.156: [CMS-concurrent-reset-start]
>>> 1741.165: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1742.898: [GC [1 CMS-initial-mark: 12849K(21428K)] 129085K(139444K), 0.0144050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1742.913: [CMS-concurrent-mark-start]
>>> 1742.931: [CMS-concurrent-mark: 0.017/0.019 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1742.931: [CMS-concurrent-preclean-start]
>>> 1742.932: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1742.932: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1748.016: [CMS-concurrent-abortable-preclean: 0.728/5.084 secs] [Times: user=0.73 sys=0.00, real=5.09 secs]
>>> 1748.016: [GC[YG occupancy: 116596 K (118016 K)]1748.016: [Rescan (parallel) , 0.0149950 secs]1748.031: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 12849K(21428K)] 129446K(139444K), 0.0150970 secs] [Times: user=0.17 sys=0.00, real=0.01 secs]
>>> 1748.031: [CMS-concurrent-sweep-start]
>>> 1748.033: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1748.033: [CMS-concurrent-reset-start]
>>> 1748.041: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1750.042: [GC [1 CMS-initial-mark: 12849K(21428K)] 129574K(139444K), 0.0141840 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1750.056: [CMS-concurrent-mark-start]
>>> 1750.073: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
>>> 1750.073: [CMS-concurrent-preclean-start]
>>> 1750.074: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1750.074: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1755.080: [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times: user=0.70 sys=0.00, real=5.00 secs]
>>> 1755.080: [GC[YG occupancy: 117044 K (118016 K)]1755.080: [Rescan (parallel) , 0.0155560 secs]1755.096: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 12849K(21428K)] 129894K(139444K), 0.0156580 secs] [Times: user=0.17 sys=0.00, real=0.02 secs]
>>> 1755.096: [CMS-concurrent-sweep-start]
>>> 1755.097: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1755.097: [CMS-concurrent-reset-start]
>>> 1755.105: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1756.660: [GC 1756.660: [ParNew: 117108K->482K(118016K), 0.0081410 secs] 129958K->24535K(144568K), 0.0083030 secs] [Times: user=0.05 sys=0.01, real=0.01 secs]
>>> 1756.668: [GC [1 CMS-initial-mark: 24053K(26552K)] 24599K(144568K), 0.0015280 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1756.670: [CMS-concurrent-mark-start]
>>> 1756.688: [CMS-concurrent-mark: 0.016/0.018 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1756.688: [CMS-concurrent-preclean-start]
>>> 1756.689: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1756.689: [GC[YG occupancy: 546 K (118016 K)]1756.689: [Rescan (parallel) , 0.0018170 secs]1756.691: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(26552K)] 24599K(144568K), 0.0019050 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1756.691: [CMS-concurrent-sweep-start]
>>> 1756.694: [CMS-concurrent-sweep: 0.004/0.004 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1756.694: [CMS-concurrent-reset-start]
>>> 1756.704: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1758.704: [GC [1 CMS-initial-mark: 24053K(40092K)] 25372K(158108K), 0.0014030 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1758.705: [CMS-concurrent-mark-start]
>>> 1758.720: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 1758.720: [CMS-concurrent-preclean-start]
>>> 1758.720: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1758.721: [GC[YG occupancy: 1319 K (118016 K)]1758.721: [Rescan (parallel) , 0.0014940 secs]1758.722: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 25372K(158108K), 0.0015850 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1758.722: [CMS-concurrent-sweep-start]
>>> 1758.726: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1758.726: [CMS-concurrent-reset-start]
>>> 1758.735: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1760.735: [GC [1 CMS-initial-mark: 24053K(40092K)] 25565K(158108K), 0.0014530 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1760.737: [CMS-concurrent-mark-start]
>>> 1760.755: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1760.755: [CMS-concurrent-preclean-start]
>>> 1760.755: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1760.756: [GC[YG occupancy: 1512 K (118016 K)]1760.756: [Rescan (parallel) , 0.0014970 secs]1760.757: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 25565K(158108K), 0.0015980 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1760.757: [CMS-concurrent-sweep-start]
>>> 1760.761: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1760.761: [CMS-concurrent-reset-start]
>>> 1760.770: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1762.770: [GC [1 CMS-initial-mark: 24053K(40092K)] 25693K(158108K), 0.0013680 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1762.772: [CMS-concurrent-mark-start]
>>> 1762.788: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
>>> 1762.788: [CMS-concurrent-preclean-start]
>>> 1762.788: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1762.788: [GC[YG occupancy: 1640 K (118016 K)]1762.789: [Rescan (parallel) , 0.0020360 secs]1762.791: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 25693K(158108K), 0.0021450 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1762.791: [CMS-concurrent-sweep-start]
>>> 1762.794: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1762.794: [CMS-concurrent-reset-start]
>>> 1762.803: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1764.804: [GC [1 CMS-initial-mark: 24053K(40092K)] 26747K(158108K), 0.0014620 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1764.805: [CMS-concurrent-mark-start]
>>> 1764.819: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 1764.819: [CMS-concurrent-preclean-start]
>>> 1764.820: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1764.820: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1769.835: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.02 secs]
>>> 1769.835: [GC[YG occupancy: 3015 K (118016 K)]1769.835: [Rescan (parallel) , 0.0010360 secs]1769.836: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 27068K(158108K), 0.0011310 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1769.837: [CMS-concurrent-sweep-start]
>>> 1769.840: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1769.840: [CMS-concurrent-reset-start]
>>> 1769.849: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1771.850: [GC [1 CMS-initial-mark: 24053K(40092K)] 27196K(158108K), 0.0014740 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1771.851: [CMS-concurrent-mark-start]
>>> 1771.868: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
>>> 1771.868: [CMS-concurrent-preclean-start]
>>> 1771.868: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1771.868: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1776.913: [CMS-concurrent-abortable-preclean: 0.112/5.044 secs] [Times: user=0.12 sys=0.00, real=5.04 secs]
>>> 1776.913: [GC[YG occupancy: 4052 K (118016 K)]1776.913: [Rescan (parallel) , 0.0017790 secs]1776.915: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 28105K(158108K), 0.0018790 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1776.915: [CMS-concurrent-sweep-start]
>>> 1776.918: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1776.918: [CMS-concurrent-reset-start]
>>> 1776.927: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1778.928: [GC [1 CMS-initial-mark: 24053K(40092K)] 28233K(158108K), 0.0015470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1778.929: [CMS-concurrent-mark-start]
>>> 1778.947: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1778.947: [CMS-concurrent-preclean-start]
>>> 1778.947: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1778.947: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1783.963: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
>>> 1783.963: [GC[YG occupancy: 4505 K (118016 K)]1783.963: [Rescan (parallel) , 0.0014480 secs]1783.965: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 28558K(158108K), 0.0015470 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1783.965: [CMS-concurrent-sweep-start]
>>> 1783.968: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1783.968: [CMS-concurrent-reset-start]
>>> 1783.977: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1785.978: [GC [1 CMS-initial-mark: 24053K(40092K)] 28686K(158108K), 0.0015760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1785.979: [CMS-concurrent-mark-start]
>>> 1785.996: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
>>> 1785.996: [CMS-concurrent-preclean-start]
>>> 1785.996: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1785.996: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1791.009: [CMS-concurrent-abortable-preclean: 0.107/5.013 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
>>> 1791.010: [GC[YG occupancy: 4954 K (118016 K)]1791.010: [Rescan (parallel) , 0.0020030 secs]1791.012: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 29007K(158108K), 0.0021040 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1791.012: [CMS-concurrent-sweep-start]
>>> 1791.015: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1791.015: [CMS-concurrent-reset-start]
>>> 1791.023: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1793.023: [GC [1 CMS-initial-mark: 24053K(40092K)] 29136K(158108K), 0.0017200 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1793.025: [CMS-concurrent-mark-start]
>>> 1793.044: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.08 sys=0.00, real=0.02 secs]
>>> 1793.044: [CMS-concurrent-preclean-start]
>>> 1793.045: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1793.045: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1798.137: [CMS-concurrent-abortable-preclean: 0.112/5.093 secs] [Times: user=0.11 sys=0.00, real=5.09 secs]
>>> 1798.137: [GC[YG occupancy: 6539 K (118016 K)]1798.137: [Rescan (parallel) , 0.0016650 secs]1798.139: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 30592K(158108K), 0.0017600 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1798.139: [CMS-concurrent-sweep-start]
>>> 1798.143: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1798.143: [CMS-concurrent-reset-start]
>>> 1798.152: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1800.152: [GC [1 CMS-initial-mark: 24053K(40092K)] 30721K(158108K), 0.0016650 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1800.154: [CMS-concurrent-mark-start]
>>> 1800.170: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07 sys=0.00, real=0.01 secs]
>>> 1800.170: [CMS-concurrent-preclean-start]
>>> 1800.171: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1800.171: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1805.181: [CMS-concurrent-abortable-preclean: 0.110/5.010 secs] [Times: user=0.12 sys=0.00, real=5.01 secs]
>>> 1805.181: [GC[YG occupancy: 8090 K (118016 K)]1805.181: [Rescan (parallel) , 0.0018850 secs]1805.183: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 32143K(158108K), 0.0019860 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1805.183: [CMS-concurrent-sweep-start]
>>> 1805.187: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1805.187: [CMS-concurrent-reset-start]
>>> 1805.196: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1807.196: [GC [1 CMS-initial-mark: 24053K(40092K)] 32272K(158108K), 0.0018760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1807.198: [CMS-concurrent-mark-start]
>>> 1807.216: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1807.216: [CMS-concurrent-preclean-start]
>>> 1807.216: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1807.216: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1812.232: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
>>> 1812.232: [GC[YG occupancy: 8543 K (118016 K)]1812.232: [Rescan (parallel) , 0.0020890 secs]1812.234: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 32596K(158108K), 0.0021910 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1812.234: [CMS-concurrent-sweep-start]
>>> 1812.238: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1812.238: [CMS-concurrent-reset-start]
>>> 1812.247: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1812.928: [GC [1 CMS-initial-mark: 24053K(40092K)] 32661K(158108K), 0.0019710 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1812.930: [CMS-concurrent-mark-start]
>>> 1812.947: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
>>> 1812.947: [CMS-concurrent-preclean-start]
>>> 1812.947: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1812.948: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1817.963: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
>>> 1817.963: [GC[YG occupancy: 8928 K (118016 K)]1817.963: [Rescan (parallel) , 0.0011790 secs]1817.964: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 32981K(158108K), 0.0012750 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1817.964: [CMS-concurrent-sweep-start]
>>> 1817.968: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1817.968: [CMS-concurrent-reset-start]
>>> 1817.977: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1819.977: [GC [1 CMS-initial-mark: 24053K(40092K)] 33110K(158108K), 0.0018900 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1819.979: [CMS-concurrent-mark-start]
>>> 1819.996: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1819.997: [CMS-concurrent-preclean-start]
>>> 1819.997: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1819.997: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1825.012: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
>>> 1825.013: [GC[YG occupancy: 9377 K (118016 K)]1825.013: [Rescan (parallel) , 0.0020580 secs]1825.015: [weak refs processing, 0.0000110 secs] [1 CMS-remark: 24053K(40092K)] 33431K(158108K), 0.0021510 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1825.015: [CMS-concurrent-sweep-start]
>>> 1825.018: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1825.018: [CMS-concurrent-reset-start]
>>> 1825.027: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1827.028: [GC [1 CMS-initial-mark: 24053K(40092K)] 33559K(158108K), 0.0019140 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1827.030: [CMS-concurrent-mark-start]
>>> 1827.047: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07 sys=0.00, real=0.02 secs]
>>> 1827.047: [CMS-concurrent-preclean-start]
>>> 1827.047: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1827.047: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1832.066: [CMS-concurrent-abortable-preclean: 0.109/5.018 secs] [Times: user=0.12 sys=0.00, real=5.02 secs]
>>> 1832.066: [GC[YG occupancy: 9827 K (118016 K)]1832.066: [Rescan (parallel) , 0.0019440 secs]1832.068: [weak refs processing, 0.0000100 secs] [1 CMS-remark: 24053K(40092K)] 33880K(158108K), 0.0020410 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1832.068: [CMS-concurrent-sweep-start]
>>> 1832.071: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1832.071: [CMS-concurrent-reset-start]
>>> 1832.081: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1832.935: [GC [1 CMS-initial-mark: 24053K(40092K)] 34093K(158108K), 0.0019830 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1832.937: [CMS-concurrent-mark-start]
>>> 1832.954: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
>>> 1832.954: [CMS-concurrent-preclean-start]
>>> 1832.955: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1832.955: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1837.970: [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times: user=0.11 sys=0.00, real=5.01 secs]
>>> 1837.970: [GC[YG occupancy: 10349 K (118016 K)]1837.970: [Rescan (parallel) , 0.0019670 secs]1837.972: [weak refs processing, 0.0000120 secs] [1 CMS-remark: 24053K(40092K)] 34402K(158108K), 0.0020800 secs] [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 1837.972: [CMS-concurrent-sweep-start]
>>> 1837.976: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1837.976: [CMS-concurrent-reset-start]
>>> 1837.985: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 1839.985: [GC [1 CMS-initial-mark: 24053K(40092K)] 34531K(158108K), 0.0020220 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1839.987: [CMS-concurrent-mark-start]
>>> 1840.005: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.06 sys=0.01, real=0.02 secs]
>>> 1840.005: [CMS-concurrent-preclean-start]
>>> 1840.006: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1840.006: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1845.018: [CMS-concurrent-abortable-preclean: 0.106/5.012 secs] [Times: user=0.10 sys=0.01, real=5.01 secs]
>>> 1845.018: [GC[YG occupancy: 10798 K (118016 K)]1845.018: [Rescan (parallel) , 0.0015500 secs]1845.019: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 34851K(158108K), 0.0016500 secs]
>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1845.020: [CMS-concurrent-sweep-start]
>>> 1845.023: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1845.023: [CMS-concurrent-reset-start]
>>> 1845.032: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1847.032: [GC [1 CMS-initial-mark: 24053K(40092K)] 34980K(158108K),
>>> 0.0020600 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1847.035: [CMS-concurrent-mark-start]
>>> 1847.051: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.01 secs]
>>> 1847.051: [CMS-concurrent-preclean-start]
>>> 1847.052: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1847.052: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1852.067:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.02 secs]
>>> 1852.067: [GC[YG occupancy: 11247 K (118016 K)]1852.067: [Rescan
>>> (parallel) , 0.0011880 secs]1852.069: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 35300K(158108K), 0.0012900 secs]
>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1852.069: [CMS-concurrent-sweep-start]
>>> 1852.072: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1852.072: [CMS-concurrent-reset-start]
>>> 1852.081: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1854.082: [GC [1 CMS-initial-mark: 24053K(40092K)] 35429K(158108K),
>>> 0.0021010 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1854.084: [CMS-concurrent-mark-start]
>>> 1854.100: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1854.100: [CMS-concurrent-preclean-start]
>>> 1854.101: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1854.101: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1859.116:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.12 sys=0.00, real=5.02 secs]
>>> 1859.116: [GC[YG occupancy: 11701 K (118016 K)]1859.117: [Rescan
>>> (parallel) , 0.0010230 secs]1859.118: [weak refs processing, 0.0000130
>>> secs] [1 CMS-remark: 24053K(40092K)] 35754K(158108K), 0.0011230 secs]
>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1859.118: [CMS-concurrent-sweep-start]
>>> 1859.121: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1859.121: [CMS-concurrent-reset-start]
>>> 1859.130: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1861.131: [GC [1 CMS-initial-mark: 24053K(40092K)] 35882K(158108K),
>>> 0.0021240 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1861.133: [CMS-concurrent-mark-start]
>>> 1861.149: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1861.149: [CMS-concurrent-preclean-start]
>>> 1861.150: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1861.150: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1866.220:
>>> [CMS-concurrent-abortable-preclean: 0.112/5.070 secs] [Times:
>>> user=0.12 sys=0.00, real=5.07 secs]
>>> 1866.220: [GC[YG occupancy: 12388 K (118016 K)]1866.220: [Rescan
>>> (parallel) , 0.0027090 secs]1866.223: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 36441K(158108K), 0.0028070 secs]
>>> [Times: user=0.02 sys=0.00, real=0.01 secs]
>>> 1866.223: [CMS-concurrent-sweep-start]
>>> 1866.227: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1866.227: [CMS-concurrent-reset-start]
>>> 1866.236: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1868.236: [GC [1 CMS-initial-mark: 24053K(40092K)] 36569K(158108K),
>>> 0.0023650 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1868.239: [CMS-concurrent-mark-start]
>>> 1868.256: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1868.256: [CMS-concurrent-preclean-start]
>>> 1868.257: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1868.257: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1873.267:
>>> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
>>> user=0.13 sys=0.00, real=5.01 secs]
>>> 1873.268: [GC[YG occupancy: 12837 K (118016 K)]1873.268: [Rescan
>>> (parallel) , 0.0018720 secs]1873.270: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 36890K(158108K), 0.0019730 secs]
>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1873.270: [CMS-concurrent-sweep-start]
>>> 1873.273: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1873.273: [CMS-concurrent-reset-start]
>>> 1873.282: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1875.283: [GC [1 CMS-initial-mark: 24053K(40092K)] 37018K(158108K),
>>> 0.0024410 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1875.285: [CMS-concurrent-mark-start]
>>> 1875.302: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1875.302: [CMS-concurrent-preclean-start]
>>> 1875.302: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1875.303: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1880.318:
>>> [CMS-concurrent-abortable-preclean: 0.110/5.015 secs] [Times:
>>> user=0.12 sys=0.00, real=5.02 secs]
>>> 1880.318: [GC[YG occupancy: 13286 K (118016 K)]1880.318: [Rescan
>>> (parallel) , 0.0023860 secs]1880.321: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 24053K(40092K)] 37339K(158108K), 0.0024910 secs]
>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1880.321: [CMS-concurrent-sweep-start]
>>> 1880.324: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1880.324: [CMS-concurrent-reset-start]
>>> 1880.333: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 1882.334: [GC [1 CMS-initial-mark: 24053K(40092K)] 37467K(158108K),
>>> 0.0024090 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1882.336: [CMS-concurrent-mark-start]
>>> 1882.352: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1882.352: [CMS-concurrent-preclean-start]
>>> 1882.353: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1882.353: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1887.368:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.12 sys=0.00, real=5.02 secs]
>>> 1887.368: [GC[YG occupancy: 13739 K (118016 K)]1887.368: [Rescan
>>> (parallel) , 0.0022370 secs]1887.370: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 37792K(158108K), 0.0023360 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 1887.371: [CMS-concurrent-sweep-start]
>>> 1887.374: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1887.374: [CMS-concurrent-reset-start]
>>> 1887.383: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1889.384: [GC [1 CMS-initial-mark: 24053K(40092K)] 37920K(158108K),
>>> 0.0024690 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1889.386: [CMS-concurrent-mark-start]
>>> 1889.404: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1889.404: [CMS-concurrent-preclean-start]
>>> 1889.405: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.01 sys=0.00, real=0.00 secs]
>>> 1889.405: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1894.488:
>>> [CMS-concurrent-abortable-preclean: 0.112/5.083 secs] [Times:
>>> user=0.11 sys=0.00, real=5.08 secs]
>>> 1894.488: [GC[YG occupancy: 14241 K (118016 K)]1894.488: [Rescan
>>> (parallel) , 0.0020670 secs]1894.490: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 38294K(158108K), 0.0021630 secs]
>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1894.490: [CMS-concurrent-sweep-start]
>>> 1894.494: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1894.494: [CMS-concurrent-reset-start]
>>> 1894.503: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1896.503: [GC [1 CMS-initial-mark: 24053K(40092K)] 38422K(158108K),
>>> 0.0025430 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1896.506: [CMS-concurrent-mark-start]
>>> 1896.524: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1896.524: [CMS-concurrent-preclean-start]
>>> 1896.525: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1896.525: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1901.540:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 1901.540: [GC[YG occupancy: 14690 K (118016 K)]1901.540: [Rescan
>>> (parallel) , 0.0014810 secs]1901.542: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 38743K(158108K), 0.0015820 secs]
>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1901.542: [CMS-concurrent-sweep-start]
>>> 1901.545: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1901.545: [CMS-concurrent-reset-start]
>>> 1901.555: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1903.555: [GC [1 CMS-initial-mark: 24053K(40092K)] 38871K(158108K),
>>> 0.0025990 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1903.558: [CMS-concurrent-mark-start]
>>> 1903.575: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1903.575: [CMS-concurrent-preclean-start]
>>> 1903.576: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1903.576: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1908.586:
>>> [CMS-concurrent-abortable-preclean: 0.105/5.010 secs] [Times:
>>> user=0.10 sys=0.00, real=5.01 secs]
>>> 1908.587: [GC[YG occupancy: 15207 K (118016 K)]1908.587: [Rescan
>>> (parallel) , 0.0026240 secs]1908.589: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 39260K(158108K), 0.0027260 secs]
>>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>>> 1908.589: [CMS-concurrent-sweep-start]
>>> 1908.593: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1908.593: [CMS-concurrent-reset-start]
>>> 1908.602: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1910.602: [GC [1 CMS-initial-mark: 24053K(40092K)] 39324K(158108K),
>>> 0.0025610 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1910.605: [CMS-concurrent-mark-start]
>>> 1910.621: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1910.621: [CMS-concurrent-preclean-start]
>>> 1910.622: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.01 sys=0.00, real=0.00 secs]
>>> 1910.622: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1915.684:
>>> [CMS-concurrent-abortable-preclean: 0.112/5.062 secs] [Times:
>>> user=0.11 sys=0.00, real=5.07 secs]
>>> 1915.684: [GC[YG occupancy: 15592 K (118016 K)]1915.684: [Rescan
>>> (parallel) , 0.0023940 secs]1915.687: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 24053K(40092K)] 39645K(158108K), 0.0025050 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 1915.687: [CMS-concurrent-sweep-start]
>>> 1915.690: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1915.690: [CMS-concurrent-reset-start]
>>> 1915.699: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1917.700: [GC [1 CMS-initial-mark: 24053K(40092K)] 39838K(158108K),
>>> 0.0025010 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1917.702: [CMS-concurrent-mark-start]
>>> 1917.719: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1917.719: [CMS-concurrent-preclean-start]
>>> 1917.719: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1917.719: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1922.735:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.01, real=5.02 secs]
>>> 1922.735: [GC[YG occupancy: 16198 K (118016 K)]1922.735: [Rescan
>>> (parallel) , 0.0028750 secs]1922.738: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 40251K(158108K), 0.0029760 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 1922.738: [CMS-concurrent-sweep-start]
>>> 1922.741: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1922.741: [CMS-concurrent-reset-start]
>>> 1922.751: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1922.957: [GC [1 CMS-initial-mark: 24053K(40092K)] 40324K(158108K),
>>> 0.0027380 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1922.960: [CMS-concurrent-mark-start]
>>> 1922.978: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1922.978: [CMS-concurrent-preclean-start]
>>> 1922.979: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1922.979: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1927.994:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.02 secs]
>>> 1927.995: [GC[YG occupancy: 16645 K (118016 K)]1927.995: [Rescan
>>> (parallel) , 0.0013210 secs]1927.996: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 40698K(158108K), 0.0017610 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 1927.996: [CMS-concurrent-sweep-start]
>>> 1928.000: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1928.000: [CMS-concurrent-reset-start]
>>> 1928.009: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1930.009: [GC [1 CMS-initial-mark: 24053K(40092K)] 40826K(158108K),
>>> 0.0028310 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1930.012: [CMS-concurrent-mark-start]
>>> 1930.028: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1930.028: [CMS-concurrent-preclean-start]
>>> 1930.029: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1930.029: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1935.044:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 1935.045: [GC[YG occupancy: 17098 K (118016 K)]1935.045: [Rescan
>>> (parallel) , 0.0015440 secs]1935.046: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 24053K(40092K)] 41151K(158108K), 0.0016490 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 1935.046: [CMS-concurrent-sweep-start]
>>> 1935.050: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1935.050: [CMS-concurrent-reset-start]
>>> 1935.059: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1937.059: [GC [1 CMS-initial-mark: 24053K(40092K)] 41279K(158108K),
>>> 0.0028290 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1937.062: [CMS-concurrent-mark-start]
>>> 1937.079: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1937.079: [CMS-concurrent-preclean-start]
>>> 1937.079: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1937.079: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1942.095:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.01, real=5.02 secs]
>>> 1942.095: [GC[YG occupancy: 17547 K (118016 K)]1942.095: [Rescan
>>> (parallel) , 0.0030270 secs]1942.098: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 41600K(158108K), 0.0031250 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 1942.098: [CMS-concurrent-sweep-start]
>>> 1942.101: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1942.101: [CMS-concurrent-reset-start]
>>> 1942.111: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1944.111: [GC [1 CMS-initial-mark: 24053K(40092K)] 41728K(158108K),
>>> 0.0028080 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1944.114: [CMS-concurrent-mark-start]
>>> 1944.130: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1944.130: [CMS-concurrent-preclean-start]
>>> 1944.131: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1944.131: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1949.146:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.12 sys=0.00, real=5.02 secs]
>>> 1949.146: [GC[YG occupancy: 17996 K (118016 K)]1949.146: [Rescan
>>> (parallel) , 0.0028800 secs]1949.149: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 24053K(40092K)] 42049K(158108K), 0.0029810 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 1949.149: [CMS-concurrent-sweep-start]
>>> 1949.152: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1949.152: [CMS-concurrent-reset-start]
>>> 1949.162: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1951.162: [GC [1 CMS-initial-mark: 24053K(40092K)] 42177K(158108K),
>>> 0.0028760 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1951.165: [CMS-concurrent-mark-start]
>>> 1951.184: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1951.184: [CMS-concurrent-preclean-start]
>>> 1951.184: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1951.184: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1956.244:
>>> [CMS-concurrent-abortable-preclean: 0.112/5.059 secs] [Times:
>>> user=0.11 sys=0.01, real=5.05 secs]
>>> 1956.244: [GC[YG occupancy: 18498 K (118016 K)]1956.244: [Rescan
>>> (parallel) , 0.0019760 secs]1956.246: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 42551K(158108K), 0.0020750 secs]
>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>> 1956.246: [CMS-concurrent-sweep-start]
>>> 1956.249: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1956.249: [CMS-concurrent-reset-start]
>>> 1956.259: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1958.259: [GC [1 CMS-initial-mark: 24053K(40092K)] 42747K(158108K),
>>> 0.0029160 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1958.262: [CMS-concurrent-mark-start]
>>> 1958.279: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1958.279: [CMS-concurrent-preclean-start]
>>> 1958.279: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1958.279: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1963.295:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 1963.295: [GC[YG occupancy: 18951 K (118016 K)]1963.295: [Rescan
>>> (parallel) , 0.0020140 secs]1963.297: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 43004K(158108K), 0.0021100 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 1963.297: [CMS-concurrent-sweep-start]
>>> 1963.300: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1963.300: [CMS-concurrent-reset-start]
>>> 1963.310: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1965.310: [GC [1 CMS-initial-mark: 24053K(40092K)] 43132K(158108K),
>>> 0.0029420 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1965.313: [CMS-concurrent-mark-start]
>>> 1965.329: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1965.329: [CMS-concurrent-preclean-start]
>>> 1965.330: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1965.330: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1970.345:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.02 secs]
>>> 1970.345: [GC[YG occupancy: 19400 K (118016 K)]1970.345: [Rescan
>>> (parallel) , 0.0031610 secs]1970.349: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 43453K(158108K), 0.0032580 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 1970.349: [CMS-concurrent-sweep-start]
>>> 1970.352: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1970.352: [CMS-concurrent-reset-start]
>>> 1970.361: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1972.362: [GC [1 CMS-initial-mark: 24053K(40092K)] 43581K(158108K),
>>> 0.0029960 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1972.365: [CMS-concurrent-mark-start]
>>> 1972.381: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 1972.381: [CMS-concurrent-preclean-start]
>>> 1972.382: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1972.382: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1977.397:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.12 sys=0.00, real=5.02 secs]
>>> 1977.398: [GC[YG occupancy: 19849 K (118016 K)]1977.398: [Rescan
>>> (parallel) , 0.0018110 secs]1977.399: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 43902K(158108K), 0.0019100 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 1977.400: [CMS-concurrent-sweep-start]
>>> 1977.403: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1977.403: [CMS-concurrent-reset-start]
>>> 1977.412: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1979.413: [GC [1 CMS-initial-mark: 24053K(40092K)] 44031K(158108K),
>>> 0.0030240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 1979.416: [CMS-concurrent-mark-start]
>>> 1979.434: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
>>> sys=0.00, real=0.02 secs]
>>> 1979.434: [CMS-concurrent-preclean-start]
>>> 1979.434: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1979.434: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1984.511:
>>> [CMS-concurrent-abortable-preclean: 0.112/5.077 secs] [Times:
>>> user=0.12 sys=0.00, real=5.07 secs]
>>> 1984.511: [GC[YG occupancy: 20556 K (118016 K)]1984.511: [Rescan
>>> (parallel) , 0.0032740 secs]1984.514: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 44609K(158108K), 0.0033720 secs]
>>> [Times: user=0.03 sys=0.00, real=0.01 secs]
>>> 1984.515: [CMS-concurrent-sweep-start]
>>> 1984.518: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1984.518: [CMS-concurrent-reset-start]
>>> 1984.527: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1986.528: [GC [1 CMS-initial-mark: 24053K(40092K)] 44737K(158108K),
>>> 0.0032890 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1986.531: [CMS-concurrent-mark-start]
>>> 1986.548: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 1986.548: [CMS-concurrent-preclean-start]
>>> 1986.548: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1986.548: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1991.564:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.12 sys=0.00, real=5.02 secs]
>>> 1991.564: [GC[YG occupancy: 21005 K (118016 K)]1991.564: [Rescan
>>> (parallel) , 0.0022540 secs]1991.566: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 45058K(158108K), 0.0023650 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 1991.566: [CMS-concurrent-sweep-start]
>>> 1991.570: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 1991.570: [CMS-concurrent-reset-start]
>>> 1991.579: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 1993.579: [GC [1 CMS-initial-mark: 24053K(40092K)] 45187K(158108K),
>>> 0.0032480 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 1993.583: [CMS-concurrent-mark-start]
>>> 1993.599: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 1993.599: [CMS-concurrent-preclean-start]
>>> 1993.600: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 1993.600: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 1998.688:
>>> [CMS-concurrent-abortable-preclean: 0.112/5.089 secs] [Times:
>>> user=0.10 sys=0.01, real=5.09 secs]
>>> 1998.689: [GC[YG occupancy: 21454 K (118016 K)]1998.689: [Rescan
>>> (parallel) , 0.0025510 secs]1998.691: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 45507K(158108K), 0.0026500 secs]
>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>> 1998.691: [CMS-concurrent-sweep-start]
>>> 1998.695: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 1998.695: [CMS-concurrent-reset-start]
>>> 1998.704: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 2000.704: [GC [1 CMS-initial-mark: 24053K(40092K)] 45636K(158108K),
>>> 0.0033350 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2000.708: [CMS-concurrent-mark-start]
>>> 2000.726: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2000.726: [CMS-concurrent-preclean-start]
>>> 2000.726: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2000.726: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2005.742:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.12 sys=0.00, real=5.01 secs]
>>> 2005.742: [GC[YG occupancy: 21968 K (118016 K)]2005.742: [Rescan
>>> (parallel) , 0.0027300 secs]2005.745: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 46021K(158108K), 0.0028560 secs]
>>> [Times: user=0.02 sys=0.01, real=0.01 secs]
>>> 2005.745: [CMS-concurrent-sweep-start]
>>> 2005.748: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2005.748: [CMS-concurrent-reset-start]
>>> 2005.757: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.01, real=0.01 secs]
>>> 2007.758: [GC [1 CMS-initial-mark: 24053K(40092K)] 46217K(158108K),
>>> 0.0033290 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2007.761: [CMS-concurrent-mark-start]
>>> 2007.778: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 2007.778: [CMS-concurrent-preclean-start]
>>> 2007.778: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2007.778: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2012.794:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.12 sys=0.00, real=5.02 secs]
>>> 2012.794: [GC[YG occupancy: 22421 K (118016 K)]2012.794: [Rescan
>>> (parallel) , 0.0036890 secs]2012.798: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 46474K(158108K), 0.0037910 secs]
>>> [Times: user=0.02 sys=0.01, real=0.00 secs]
>>> 2012.798: [CMS-concurrent-sweep-start]
>>> 2012.801: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2012.801: [CMS-concurrent-reset-start]
>>> 2012.810: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2012.980: [GC [1 CMS-initial-mark: 24053K(40092K)] 46547K(158108K),
>>> 0.0033990 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 2012.984: [CMS-concurrent-mark-start]
>>> 2013.004: [CMS-concurrent-mark: 0.019/0.020 secs] [Times: user=0.06
>>> sys=0.01, real=0.02 secs]
>>> 2013.004: [CMS-concurrent-preclean-start]
>>> 2013.005: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2013.005: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2018.020:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.12 sys=0.00, real=5.01 secs]
>>> 2018.020: [GC[YG occupancy: 22867 K (118016 K)]2018.020: [Rescan
>>> (parallel) , 0.0025180 secs]2018.023: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 46920K(158108K), 0.0026190 secs]
>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>> 2018.023: [CMS-concurrent-sweep-start]
>>> 2018.026: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 2018.026: [CMS-concurrent-reset-start]
>>> 2018.036: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 2020.036: [GC [1 CMS-initial-mark: 24053K(40092K)] 47048K(158108K),
>>> 0.0034020 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2020.039: [CMS-concurrent-mark-start]
>>> 2020.057: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2020.057: [CMS-concurrent-preclean-start]
>>> 2020.058: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2020.058: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2025.073:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 2025.073: [GC[YG occupancy: 23316 K (118016 K)]2025.073: [Rescan
>>> (parallel) , 0.0020110 secs]2025.075: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 47369K(158108K), 0.0021080 secs]
>>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>>> 2025.075: [CMS-concurrent-sweep-start]
>>> 2025.079: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2025.079: [CMS-concurrent-reset-start]
>>> 2025.088: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2027.088: [GC [1 CMS-initial-mark: 24053K(40092K)] 47498K(158108K),
>>> 0.0034100 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2027.092: [CMS-concurrent-mark-start]
>>> 2027.108: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2027.108: [CMS-concurrent-preclean-start]
>>> 2027.109: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2027.109: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2032.120:
>>> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
>>> user=0.10 sys=0.00, real=5.01 secs]
>>> 2032.120: [GC[YG occupancy: 23765 K (118016 K)]2032.120: [Rescan
>>> (parallel) , 0.0025970 secs]2032.123: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 47818K(158108K), 0.0026940 secs]
>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>> 2032.123: [CMS-concurrent-sweep-start]
>>> 2032.126: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 2032.126: [CMS-concurrent-reset-start]
>>> 2032.135: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 2034.136: [GC [1 CMS-initial-mark: 24053K(40092K)] 47951K(158108K),
>>> 0.0034720 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2034.139: [CMS-concurrent-mark-start]
>>> 2034.156: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2034.156: [CMS-concurrent-preclean-start]
>>> 2034.156: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2034.156: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2039.171:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 2039.172: [GC[YG occupancy: 24218 K (118016 K)]2039.172: [Rescan
>>> (parallel) , 0.0038590 secs]2039.176: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 48271K(158108K), 0.0039560 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 2039.176: [CMS-concurrent-sweep-start]
>>> 2039.179: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2039.179: [CMS-concurrent-reset-start]
>>> 2039.188: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2041.188: [GC [1 CMS-initial-mark: 24053K(40092K)] 48400K(158108K),
>>> 0.0035110 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2041.192: [CMS-concurrent-mark-start]
>>> 2041.209: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 2041.209: [CMS-concurrent-preclean-start]
>>> 2041.209: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2041.209: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2046.268:
>>> [CMS-concurrent-abortable-preclean: 0.108/5.058 secs] [Times:
>>> user=0.12 sys=0.00, real=5.06 secs]
>>> 2046.268: [GC[YG occupancy: 24813 K (118016 K)]2046.268: [Rescan
>>> (parallel) , 0.0042050 secs]2046.272: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 48866K(158108K), 0.0043070 secs]
>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>> 2046.272: [CMS-concurrent-sweep-start]
>>> 2046.275: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 2046.275: [CMS-concurrent-reset-start]
>>> 2046.285: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2048.285: [GC [1 CMS-initial-mark: 24053K(40092K)] 48994K(158108K),
>>> 0.0037700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2048.289: [CMS-concurrent-mark-start]
>>> 2048.307: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2048.307: [CMS-concurrent-preclean-start]
>>> 2048.307: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2048.307: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2053.323:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 2053.323: [GC[YG occupancy: 25262 K (118016 K)]2053.323: [Rescan
>>> (parallel) , 0.0030780 secs]2053.326: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 49315K(158108K), 0.0031760 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 2053.326: [CMS-concurrent-sweep-start]
>>> 2053.329: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2053.329: [CMS-concurrent-reset-start]
>>> 2053.338: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2055.339: [GC [1 CMS-initial-mark: 24053K(40092K)] 49444K(158108K),
>>> 0.0037730 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2055.343: [CMS-concurrent-mark-start]
>>> 2055.359: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 2055.359: [CMS-concurrent-preclean-start]
>>> 2055.360: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2055.360: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2060.373:
>>> [CMS-concurrent-abortable-preclean: 0.107/5.013 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 2060.373: [GC[YG occupancy: 25715 K (118016 K)]2060.373: [Rescan
>>> (parallel) , 0.0037090 secs]2060.377: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 49768K(158108K), 0.0038110 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 2060.377: [CMS-concurrent-sweep-start]
>>> 2060.380: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2060.380: [CMS-concurrent-reset-start]
>>> 2060.389: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2062.390: [GC [1 CMS-initial-mark: 24053K(40092K)] 49897K(158108K),
>>> 0.0037860 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2062.394: [CMS-concurrent-mark-start]
>>> 2062.410: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2062.410: [CMS-concurrent-preclean-start]
>>> 2062.411: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2062.411: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2067.426:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.02 secs]
>>> 2067.427: [GC[YG occupancy: 26231 K (118016 K)]2067.427: [Rescan
>>> (parallel) , 0.0031980 secs]2067.430: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 50284K(158108K), 0.0033100 secs]
>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>> 2067.430: [CMS-concurrent-sweep-start]
>>> 2067.433: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 2067.433: [CMS-concurrent-reset-start]
>>> 2067.443: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2069.443: [GC [1 CMS-initial-mark: 24053K(40092K)] 50412K(158108K),
>>> 0.0038060 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 2069.447: [CMS-concurrent-mark-start]
>>> 2069.465: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2069.465: [CMS-concurrent-preclean-start]
>>> 2069.465: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2069.465: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2074.535:
>>> [CMS-concurrent-abortable-preclean: 0.112/5.070 secs] [Times:
>>> user=0.12 sys=0.00, real=5.06 secs]
>>> 2074.535: [GC[YG occupancy: 26749 K (118016 K)]2074.535: [Rescan
>>> (parallel) , 0.0040450 secs]2074.539: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 50802K(158108K), 0.0041460 secs]
>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>> 2074.539: [CMS-concurrent-sweep-start]
>>> 2074.543: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2074.543: [CMS-concurrent-reset-start]
>>> 2074.552: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 2076.552: [GC [1 CMS-initial-mark: 24053K(40092K)] 50930K(158108K),
>>> 0.0038960 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 2076.556: [CMS-concurrent-mark-start]
>>> 2076.575: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2076.575: [CMS-concurrent-preclean-start]
>>> 2076.575: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2076.575: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2081.590:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.014 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 2081.590: [GC[YG occupancy: 27198 K (118016 K)]2081.590: [Rescan
>>> (parallel) , 0.0042420 secs]2081.594: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 51251K(158108K), 0.0043450 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 2081.594: [CMS-concurrent-sweep-start]
>>> 2081.597: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2081.597: [CMS-concurrent-reset-start]
>>> 2081.607: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2083.607: [GC [1 CMS-initial-mark: 24053K(40092K)] 51447K(158108K),
>>> 0.0038630 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2083.611: [CMS-concurrent-mark-start]
>>> 2083.628: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2083.628: [CMS-concurrent-preclean-start]
>>> 2083.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2083.628: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2088.642:
>>> [CMS-concurrent-abortable-preclean: 0.108/5.014 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 2088.642: [GC[YG occupancy: 27651 K (118016 K)]2088.642: [Rescan
>>> (parallel) , 0.0031520 secs]2088.645: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 51704K(158108K), 0.0032520 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 2088.645: [CMS-concurrent-sweep-start]
>>> 2088.649: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2088.649: [CMS-concurrent-reset-start]
>>> 2088.658: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2090.658: [GC [1 CMS-initial-mark: 24053K(40092K)] 51832K(158108K),
>>> 0.0039130 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2090.662: [CMS-concurrent-mark-start]
>>> 2090.678: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 2090.678: [CMS-concurrent-preclean-start]
>>> 2090.679: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2090.679: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2095.690:
>>> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 2095.690: [GC[YG occupancy: 28100 K (118016 K)]2095.690: [Rescan
>>> (parallel) , 0.0024460 secs]2095.693: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 52153K(158108K), 0.0025460 secs]
>>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>>> 2095.693: [CMS-concurrent-sweep-start]
>>> 2095.696: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 2095.696: [CMS-concurrent-reset-start]
>>> 2095.705: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 2096.616: [GC [1 CMS-initial-mark: 24053K(40092K)] 53289K(158108K),
>>> 0.0039340 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2096.620: [CMS-concurrent-mark-start]
>>> 2096.637: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 2096.637: [CMS-concurrent-preclean-start]
>>> 2096.638: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2096.638: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2101.654:
>>> [CMS-concurrent-abortable-preclean: 0.110/5.015 secs] [Times:
>>> user=0.12 sys=0.00, real=5.01 secs]
>>> 2101.654: [GC[YG occupancy: 29557 K (118016 K)]2101.654: [Rescan
>>> (parallel) , 0.0034020 secs]2101.657: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 53610K(158108K), 0.0035000 secs]
>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>> 2101.657: [CMS-concurrent-sweep-start]
>>> 2101.661: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2101.661: [CMS-concurrent-reset-start]
>>> 2101.670: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2103.004: [GC [1 CMS-initial-mark: 24053K(40092K)] 53997K(158108K),
>>> 0.0042590 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2103.009: [CMS-concurrent-mark-start]
>>> 2103.027: [CMS-concurrent-mark: 0.017/0.019 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2103.027: [CMS-concurrent-preclean-start]
>>> 2103.028: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2103.028: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2108.043:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.10 sys=0.01, real=5.02 secs]
>>> 2108.043: [GC[YG occupancy: 30385 K (118016 K)]2108.044: [Rescan
>>> (parallel) , 0.0048950 secs]2108.048: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 54438K(158108K), 0.0049930 secs]
>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>> 2108.049: [CMS-concurrent-sweep-start]
>>> 2108.052: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2108.052: [CMS-concurrent-reset-start]
>>> 2108.061: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2110.062: [GC [1 CMS-initial-mark: 24053K(40092K)] 54502K(158108K),
>>> 0.0042120 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>>> 2110.066: [CMS-concurrent-mark-start]
>>> 2110.084: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2110.084: [CMS-concurrent-preclean-start]
>>> 2110.085: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2110.085: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2115.100:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 2115.101: [GC[YG occupancy: 30770 K (118016 K)]2115.101: [Rescan
>>> (parallel) , 0.0049040 secs]2115.106: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 54823K(158108K), 0.0050080 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 2115.106: [CMS-concurrent-sweep-start]
>>> 2115.109: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2115.109: [CMS-concurrent-reset-start]
>>> 2115.118: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2117.118: [GC [1 CMS-initial-mark: 24053K(40092K)] 54952K(158108K),
>>> 0.0042490 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2117.123: [CMS-concurrent-mark-start]
>>> 2117.139: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 2117.139: [CMS-concurrent-preclean-start]
>>> 2117.140: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2117.140: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2122.155:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.12 sys=0.00, real=5.02 secs]
>>> 2122.155: [GC[YG occupancy: 31219 K (118016 K)]2122.155: [Rescan
>>> (parallel) , 0.0036460 secs]2122.159: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 55272K(158108K), 0.0037440 secs]
>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>> 2122.159: [CMS-concurrent-sweep-start]
>>> 2122.162: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2122.162: [CMS-concurrent-reset-start]
>>> 2122.172: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2124.172: [GC [1 CMS-initial-mark: 24053K(40092K)] 55401K(158108K),
>>> 0.0043010 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 2124.176: [CMS-concurrent-mark-start]
>>> 2124.195: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2124.195: [CMS-concurrent-preclean-start]
>>> 2124.195: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2124.195: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2129.211:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.12 sys=0.00, real=5.01 secs]
>>> 2129.211: [GC[YG occupancy: 31669 K (118016 K)]2129.211: [Rescan
>>> (parallel) , 0.0049870 secs]2129.216: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 55722K(158108K), 0.0050860 secs]
>>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>>> 2129.216: [CMS-concurrent-sweep-start]
>>> 2129.219: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 2129.219: [CMS-concurrent-reset-start]
>>> 2129.228: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2131.229: [GC [1 CMS-initial-mark: 24053K(40092K)] 55850K(158108K),
>>> 0.0042340 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2131.233: [CMS-concurrent-mark-start]
>>> 2131.249: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2131.249: [CMS-concurrent-preclean-start]
>>> 2131.249: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2131.249: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2136.292:
>>> [CMS-concurrent-abortable-preclean: 0.108/5.042 secs] [Times:
>>> user=0.11 sys=0.00, real=5.04 secs]
>>> 2136.292: [GC[YG occupancy: 32174 K (118016 K)]2136.292: [Rescan
>>> (parallel) , 0.0037250 secs]2136.296: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 56227K(158108K), 0.0038250 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 2136.296: [CMS-concurrent-sweep-start]
>>> 2136.299: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2136.299: [CMS-concurrent-reset-start]
>>> 2136.308: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2138.309: [GC [1 CMS-initial-mark: 24053K(40092K)] 56356K(158108K),
>>> 0.0043040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2138.313: [CMS-concurrent-mark-start]
>>> 2138.329: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.05
>>> sys=0.01, real=0.02 secs]
>>> 2138.329: [CMS-concurrent-preclean-start]
>>> 2138.329: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2138.329: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2143.341:
>>> [CMS-concurrent-abortable-preclean: 0.106/5.011 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 2143.341: [GC[YG occupancy: 32623 K (118016 K)]2143.341: [Rescan
>>> (parallel) , 0.0038660 secs]2143.345: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 56676K(158108K), 0.0039760 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 2143.345: [CMS-concurrent-sweep-start]
>>> 2143.349: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2143.349: [CMS-concurrent-reset-start]
>>> 2143.358: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2145.358: [GC [1 CMS-initial-mark: 24053K(40092K)] 56805K(158108K),
>>> 0.0043390 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2145.362: [CMS-concurrent-mark-start]
>>> 2145.379: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 2145.379: [CMS-concurrent-preclean-start]
>>> 2145.379: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2145.379: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2150.393:
>>> [CMS-concurrent-abortable-preclean: 0.108/5.014 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 2150.393: [GC[YG occupancy: 33073 K (118016 K)]2150.393: [Rescan
>>> (parallel) , 0.0038190 secs]2150.397: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 57126K(158108K), 0.0039210 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 2150.397: [CMS-concurrent-sweep-start]
>>> 2150.400: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2150.400: [CMS-concurrent-reset-start]
>>> 2150.410: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2152.410: [GC [1 CMS-initial-mark: 24053K(40092K)] 57254K(158108K),
>>> 0.0044080 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2152.415: [CMS-concurrent-mark-start]
>>> 2152.431: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 2152.431: [CMS-concurrent-preclean-start]
>>> 2152.432: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2152.432: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2157.447:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.01, real=5.02 secs]
>>> 2157.447: [GC[YG occupancy: 33522 K (118016 K)]2157.447: [Rescan
>>> (parallel) , 0.0038130 secs]2157.451: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 57575K(158108K), 0.0039160 secs]
>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>> 2157.451: [CMS-concurrent-sweep-start]
>>> 2157.454: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 2157.454: [CMS-concurrent-reset-start]
>>> 2157.464: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 2159.464: [GC [1 CMS-initial-mark: 24053K(40092K)] 57707K(158108K),
>>> 0.0045170 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2159.469: [CMS-concurrent-mark-start]
>>> 2159.483: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>>> sys=0.00, real=0.01 secs]
>>> 2159.483: [CMS-concurrent-preclean-start]
>>> 2159.483: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2159.483: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2164.491:
>>> [CMS-concurrent-abortable-preclean: 0.111/5.007 secs] [Times:
>>> user=0.12 sys=0.00, real=5.01 secs]
>>> 2164.491: [GC[YG occupancy: 34293 K (118016 K)]2164.491: [Rescan
>>> (parallel) , 0.0052070 secs]2164.496: [weak refs processing, 0.0000120
>>> secs] [1 CMS-remark: 24053K(40092K)] 58347K(158108K), 0.0053130 secs]
>>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>>> 2164.496: [CMS-concurrent-sweep-start]
>>> 2164.500: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2164.500: [CMS-concurrent-reset-start]
>>> 2164.509: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.01, real=0.01 secs]
>>> 2166.509: [GC [1 CMS-initial-mark: 24053K(40092K)] 58475K(158108K),
>>> 0.0045900 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2166.514: [CMS-concurrent-mark-start]
>>> 2166.533: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.07
>>> sys=0.00, real=0.02 secs]
>>> 2166.533: [CMS-concurrent-preclean-start]
>>> 2166.533: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2166.533: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2171.549:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.02 secs]
>>> 2171.549: [GC[YG occupancy: 34743 K (118016 K)]2171.549: [Rescan
>>> (parallel) , 0.0052200 secs]2171.554: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 58796K(158108K), 0.0053210 secs]
>>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>>> 2171.554: [CMS-concurrent-sweep-start]
>>> 2171.558: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2171.558: [CMS-concurrent-reset-start]
>>> 2171.567: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2173.567: [GC [1 CMS-initial-mark: 24053K(40092K)] 58924K(158108K),
>>> 0.0046700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>>> 2173.572: [CMS-concurrent-mark-start]
>>> 2173.588: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>>> sys=0.00, real=0.02 secs]
>>> 2173.588: [CMS-concurrent-preclean-start]
>>> 2173.589: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2173.589: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2178.604:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.10 sys=0.01, real=5.02 secs]
>>> 2178.605: [GC[YG occupancy: 35192 K (118016 K)]2178.605: [Rescan
>>> (parallel) , 0.0041460 secs]2178.609: [weak refs processing, 0.0000110
>>> secs] [1 CMS-remark: 24053K(40092K)] 59245K(158108K), 0.0042450 secs]
>>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>>> 2178.609: [CMS-concurrent-sweep-start]
>>> 2178.612: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
>>> sys=0.00, real=0.00 secs]
>>> 2178.612: [CMS-concurrent-reset-start]
>>> 2178.622: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>>> sys=0.00, real=0.01 secs]
>>> 2180.622: [GC [1 CMS-initial-mark: 24053K(40092K)] 59373K(158108K),
>>> 0.0047200 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>>> 2180.627: [CMS-concurrent-mark-start]
>>> 2180.645: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
>>> sys=0.00, real=0.02 secs]
>>> 2180.645: [CMS-concurrent-preclean-start]
>>> 2180.645: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>>> user=0.00 sys=0.00, real=0.00 secs]
>>> 2180.645: [CMS-concurrent-abortable-preclean-start]
>>> CMS: abort preclean due to time 2185.661:
>>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>>> user=0.11 sys=0.00, real=5.01 secs]
>>> 2185.661: [GC[YG occupancy: 35645 K (118016 K)]2185.661: [Rescan
>>> (parallel) , 0.0050730 secs]2185.666: [weak refs processing, 0.0000100
>>> secs] [1 CMS-remark: 24053K(40092K)] 59698K(158108K), 0.0051720 secs]
>>> [Times: user=0.04 sys=0.01, real=0.01 secs]
>>> 2185.666: [CMS-concurrent-sweep-start]
>>> 2185.670: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>>> sys=0.00, real=0.00 secs]
>>> 2185.670: [CMS-concurrent-reset-start]
>>> 2185.679: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>>> sys=0.00, real=0.01 secs]
>>> 2187.679: [GC [1 CMS-initial-mark: 24053K(40092K)] 59826K(158108K),
>>> 0.0047350 secs]
>>> 
>>> --
>>> gregross:)
>>> 
>


Re: long garbage collecting pause

Posted by Greg Ross <gr...@ngmoco.com>.
Thanks, Michael.

We have hbase.hregion.memstore.mslab.enabled = true but have left the
chunksize and max.allocation unset, so I assume they're at their
default values.
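
For reference, spelling those settings out explicitly in hbase-site.xml
would look something like the sketch below. The property names are the
MSLAB settings discussed above; the values shown are the commonly cited
0.92-era defaults and should be verified against your HBase version
rather than taken as authoritative:

```xml
<!-- hbase-site.xml: MSLAB settings made explicit.
     Values are assumed 0.92 defaults; verify against your release. -->
<property>
  <name>hbase.hregion.memstore.mslab.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Size of each MSLAB chunk carved out of the memstore heap. -->
  <name>hbase.hregion.memstore.mslab.chunksize</name>
  <value>2097152</value> <!-- 2 MB -->
</property>
<property>
  <!-- Cells larger than this bypass MSLAB and are allocated
       directly on the heap, so with 1 MB max cell sizes this
       threshold is worth double-checking. -->
  <name>hbase.hregion.memstore.mslab.max.allocation</name>
  <value>262144</value> <!-- 256 KB -->
</property>
```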

Greg


On Mon, Oct 1, 2012 at 1:51 PM, Michael Segel <mi...@hotmail.com> wrote:
> Have you implemented MSLABS?
>
> On Oct 1, 2012, at 3:35 PM, Greg Ross <gr...@ngmoco.com> wrote:
>
>> Hi,
>>
>> I'm having difficulty with a mapreduce job that has reducers that read
>> from and write to HBase, version 0.92.1, r1298924. Row sizes vary
>> greatly. As do the number of cells, although the number of cells is
>> typically numbered in the tens, at most. The max cell size is 1MB.
>>
>> I see the following in the logs followed by the region server promptly
>> shutting down:
>>
>> 2012-10-01 19:08:47,858 [regionserver60020] WARN
>> org.apache.hadoop.hbase.util.Sleeper: We slept 28970ms instead of
>> 3000ms, this is likely due to a long garbage collecting pause and it's
>> usually bad, see
>> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
>>
>> The full logs, including GC are below.
>>
>> Although new to HBase, I've read up on the likely GC issues and their
>> remedies. I've implemented the recommended solutions and still to no
>> avail.
>>
>> Here's what I've tried:
>>
>> (1) increased the RAM to 4G
>> (2) set -XX:+UseConcMarkSweepGC
>> (3) set -XX:+UseParNewGC
>> (4) set -XX:CMSInitiatingOccupancyFraction=N where I've attempted N=[40..70]
>> (5) I've called context.progress() in the reducer before and after
>> reading and writing
>> (6) memstore is enabled
>>
>> Is there anything else that I might have missed?
>>
>> Thanks,
>>
>> Greg
>>
>>
>> hbase logs
>> ========
>>
>> 2012-10-01 19:09:48,293
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/.tmp/d2ee47650b224189b0c27d1c20929c03
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> 2012-10-01 19:09:48,884
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 5 file(s) in U of
>> orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.
>> into d2ee47650b224189b0c27d1c20929c03, size=723.0m; total size for
>> store is 723.0m
>> 2012-10-01 19:09:48,884
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.,
>> storeName=U, fileCount=5, fileSize=1.4g, priority=2,
>> time=10631266687564968; duration=35sec
>> 2012-10-01 19:09:48,886
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
>> 2012-10-01 19:09:48,887
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 5
>> file(s) in U of
>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/.tmp,
>> seqid=132201184, totalSize=1.4g
>> 2012-10-01 19:10:04,191
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/.tmp/2e5534fea8b24eaf9cc1e05dea788c01
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> 2012-10-01 19:10:04,868
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 5 file(s) in U of
>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
>> into 2e5534fea8b24eaf9cc1e05dea788c01, size=626.5m; total size for
>> store is 626.5m
>> 2012-10-01 19:10:04,868
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
>> storeName=U, fileCount=5, fileSize=1.4g, priority=2,
>> time=10631266696614208; duration=15sec
>> 2012-10-01 19:14:04,992
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
>> 2012-10-01 19:14:04,993
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/.tmp,
>> seqid=132198830, totalSize=863.8m
>> 2012-10-01 19:14:19,147
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/.tmp/b741f8501ad248418c48262d751f6e86
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/U/b741f8501ad248418c48262d751f6e86
>> 2012-10-01 19:14:19,381
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
>> into b741f8501ad248418c48262d751f6e86, size=851.4m; total size for
>> store is 851.4m
>> 2012-10-01 19:14:19,381
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.,
>> storeName=U, fileCount=2, fileSize=863.8m, priority=5,
>> time=10631557965747111; duration=14sec
>> 2012-10-01 19:14:19,381
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
>> 2012-10-01 19:14:19,381
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>> file(s) in U of
>> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/.tmp,
>> seqid=132198819, totalSize=496.7m
>> 2012-10-01 19:14:27,337
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/.tmp/78040c736c4149a884a1bdcda9916416
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/U/78040c736c4149a884a1bdcda9916416
>> 2012-10-01 19:14:27,514
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 3 file(s) in U of
>> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
>> into 78040c736c4149a884a1bdcda9916416, size=487.5m; total size for
>> store is 487.5m
>> 2012-10-01 19:14:27,514
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.,
>> storeName=U, fileCount=3, fileSize=496.7m, priority=4,
>> time=10631557966599560; duration=8sec
>> 2012-10-01 19:14:27,514
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
>> 2012-10-01 19:14:27,514
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>> file(s) in U of
>> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/.tmp,
>> seqid=132200816, totalSize=521.7m
>> 2012-10-01 19:14:36,962
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/.tmp/0142b8bcdda948c185887358990af6d1
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/U/0142b8bcdda948c185887358990af6d1
>> 2012-10-01 19:14:37,171
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 3 file(s) in U of
>> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
>> into 0142b8bcdda948c185887358990af6d1, size=510.7m; total size for
>> store is 510.7m
>> 2012-10-01 19:14:37,171
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.,
>> storeName=U, fileCount=3, fileSize=521.7m, priority=4,
>> time=10631557967125617; duration=9sec
>> 2012-10-01 19:14:37,172
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
>> 2012-10-01 19:14:37,172
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>> file(s) in U of
>> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/.tmp,
>> seqid=132198832, totalSize=565.5m
>> 2012-10-01 19:14:57,082
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/.tmp/44a27dce8df04306908579c22be76786
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/U/44a27dce8df04306908579c22be76786
>> 2012-10-01 19:14:57,429
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 3 file(s) in U of
>> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
>> into 44a27dce8df04306908579c22be76786, size=557.7m; total size for
>> store is 557.7m
>> 2012-10-01 19:14:57,429
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.,
>> storeName=U, fileCount=3, fileSize=565.5m, priority=4,
>> time=10631557967207683; duration=20sec
>> 2012-10-01 19:14:57,429
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
>> 2012-10-01 19:14:57,430
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>> file(s) in U of
>> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/.tmp,
>> seqid=132199414, totalSize=845.6m
>> 2012-10-01 19:16:54,394
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/.tmp/771813ba0c87449ebd99d5e7916244f8
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/U/771813ba0c87449ebd99d5e7916244f8
>> 2012-10-01 19:16:54,636
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 3 file(s) in U of
>> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
>> into 771813ba0c87449ebd99d5e7916244f8, size=827.3m; total size for
>> store is 827.3m
>> 2012-10-01 19:16:54,636
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.,
>> storeName=U, fileCount=3, fileSize=845.6m, priority=4,
>> time=10631557967560440; duration=1mins, 57sec
>> 2012-10-01 19:16:54,636
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
>> 2012-10-01 19:16:54,637
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>> file(s) in U of
>> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/.tmp,
>> seqid=132198824, totalSize=1012.4m
>> 2012-10-01 19:17:35,610
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/.tmp/771a4124c763468c8dea927cb53887ee
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/U/771a4124c763468c8dea927cb53887ee
>> 2012-10-01 19:17:35,874
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 3 file(s) in U of
>> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
>> into 771a4124c763468c8dea927cb53887ee, size=974.0m; total size for
>> store is 974.0m
>> 2012-10-01 19:17:35,875
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.,
>> storeName=U, fileCount=3, fileSize=1012.4m, priority=4,
>> time=10631557967678796; duration=41sec
>> 2012-10-01 19:17:35,875
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
>> 2012-10-01 19:17:35,875
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>> file(s) in U of
>> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/.tmp,
>> seqid=132198815, totalSize=530.5m
>> 2012-10-01 19:17:47,481
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/.tmp/24328f8244f747bf8ba81b74ef2893fa
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/U/24328f8244f747bf8ba81b74ef2893fa
>> 2012-10-01 19:17:47,741
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 3 file(s) in U of
>> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
>> into 24328f8244f747bf8ba81b74ef2893fa, size=524.0m; total size for
>> store is 524.0m
>> 2012-10-01 19:17:47,741
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.,
>> storeName=U, fileCount=3, fileSize=530.5m, priority=4,
>> time=10631557967807915; duration=11sec
>> 2012-10-01 19:17:47,741
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
>> 2012-10-01 19:17:47,741
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>> file(s) in U of
>> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/.tmp,
>> seqid=132201190, totalSize=529.3m
>> 2012-10-01 19:17:58,031
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/.tmp/cae48d1b96eb4440a7bcd5fa3b4c070b
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/U/cae48d1b96eb4440a7bcd5fa3b4c070b
>> 2012-10-01 19:17:58,232
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 3 file(s) in U of
>> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
>> into cae48d1b96eb4440a7bcd5fa3b4c070b, size=521.3m; total size for
>> store is 521.3m
>> 2012-10-01 19:17:58,232
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.,
>> storeName=U, fileCount=3, fileSize=529.3m, priority=4,
>> time=10631557967959079; duration=10sec
>> 2012-10-01 19:17:58,232
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
>> 2012-10-01 19:17:58,232
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
>> file(s) in U of
>> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/.tmp,
>> seqid=132199205, totalSize=475.2m
>> 2012-10-01 19:18:06,764
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/.tmp/ba51afdc860048b6b2e1047b06fb3b29
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/U/ba51afdc860048b6b2e1047b06fb3b29
>> 2012-10-01 19:18:07,065
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 3 file(s) in U of
>> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
>> into ba51afdc860048b6b2e1047b06fb3b29, size=474.5m; total size for
>> store is 474.5m
>> 2012-10-01 19:18:07,065
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.,
>> storeName=U, fileCount=3, fileSize=475.2m, priority=4,
>> time=10631557968104570; duration=8sec
>> 2012-10-01 19:18:07,065
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
>> 2012-10-01 19:18:07,065
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/.tmp,
>> seqid=132198822, totalSize=522.5m
>> 2012-10-01 19:18:18,306
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/.tmp/7a0bd16b11f34887b2690e9510071bf0
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/U/7a0bd16b11f34887b2690e9510071bf0
>> 2012-10-01 19:18:18,439
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
>> into 7a0bd16b11f34887b2690e9510071bf0, size=520.0m; total size for
>> store is 520.0m
>> 2012-10-01 19:18:18,440
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.,
>> storeName=U, fileCount=2, fileSize=522.5m, priority=5,
>> time=10631557965863914; duration=11sec
>> 2012-10-01 19:18:18,440
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
>> 2012-10-01 19:18:18,440
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/.tmp,
>> seqid=132198823, totalSize=548.0m
>> 2012-10-01 19:18:32,288
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/.tmp/dcd050acc2e747738a90aebaae8920e4
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/U/dcd050acc2e747738a90aebaae8920e4
>> 2012-10-01 19:18:32,431
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
>> into dcd050acc2e747738a90aebaae8920e4, size=528.2m; total size for
>> store is 528.2m
>> 2012-10-01 19:18:32,431
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.,
>> storeName=U, fileCount=2, fileSize=548.0m, priority=5,
>> time=10631557966071838; duration=13sec
>> 2012-10-01 19:18:32,431
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
>> 2012-10-01 19:18:32,431
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/.tmp,
>> seqid=132199001, totalSize=475.9m
>> 2012-10-01 19:18:43,154
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/.tmp/15a9167cd9754fd4b3674fe732648a03
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/U/15a9167cd9754fd4b3674fe732648a03
>> 2012-10-01 19:18:43,322
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
>> into 15a9167cd9754fd4b3674fe732648a03, size=475.9m; total size for
>> store is 475.9m
>> 2012-10-01 19:18:43,322
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.,
>> storeName=U, fileCount=2, fileSize=475.9m, priority=5,
>> time=10631557966273447; duration=10sec
>> 2012-10-01 19:18:43,322
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
>> 2012-10-01 19:18:43,322
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/.tmp,
>> seqid=132198833, totalSize=824.8m
>> 2012-10-01 19:19:00,252
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/.tmp/bf8da91da0824a909f684c3ecd0ee8da
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/U/bf8da91da0824a909f684c3ecd0ee8da
>> 2012-10-01 19:19:00,788
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
>> into bf8da91da0824a909f684c3ecd0ee8da, size=803.0m; total size for
>> store is 803.0m
>> 2012-10-01 19:19:00,788
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.,
>> storeName=U, fileCount=2, fileSize=824.8m, priority=5,
>> time=10631557966382580; duration=17sec
>> 2012-10-01 19:19:00,788
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
>> 2012-10-01 19:19:00,788
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/.tmp,
>> seqid=132198810, totalSize=565.3m
>> 2012-10-01 19:19:11,311
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/.tmp/5cd2032f48bc4287b8866165dcb6d3e6
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/U/5cd2032f48bc4287b8866165dcb6d3e6
>> 2012-10-01 19:19:11,504
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
>> into 5cd2032f48bc4287b8866165dcb6d3e6, size=553.5m; total size for
>> store is 553.5m
>> 2012-10-01 19:19:11,504
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.,
>> storeName=U, fileCount=2, fileSize=565.3m, priority=5,
>> time=10631557966480961; duration=10sec
>> 2012-10-01 19:19:11,504
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
>> 2012-10-01 19:19:11,504
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/.tmp,
>> seqid=132198825, totalSize=519.6m
>> 2012-10-01 19:19:22,186
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/.tmp/6f29b3b15f1747c196ac9aa79f4835b1
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/U/6f29b3b15f1747c196ac9aa79f4835b1
>> 2012-10-01 19:19:22,437
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
>> into 6f29b3b15f1747c196ac9aa79f4835b1, size=512.7m; total size for
>> store is 512.7m
>> 2012-10-01 19:19:22,437
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.,
>> storeName=U, fileCount=2, fileSize=519.6m, priority=5,
>> time=10631557966769107; duration=10sec
>> 2012-10-01 19:19:22,437
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
>> 2012-10-01 19:19:22,437
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/.tmp,
>> seqid=132198836, totalSize=528.3m
>> 2012-10-01 19:19:34,752
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/.tmp/d836630f7e2b4212848d7e4edc7238f1
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/U/d836630f7e2b4212848d7e4edc7238f1
>> 2012-10-01 19:19:34,945
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
>> into d836630f7e2b4212848d7e4edc7238f1, size=504.3m; total size for
>> store is 504.3m
>> 2012-10-01 19:19:34,945
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.,
>> storeName=U, fileCount=2, fileSize=528.3m, priority=5,
>> time=10631557967026388; duration=12sec
>> 2012-10-01 19:19:34,945
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
>> 2012-10-01 19:19:34,945
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/.tmp,
>> seqid=132198841, totalSize=813.8m
>> 2012-10-01 19:19:49,303
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/.tmp/c70692c971cd4e899957f9d5b189372e
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/U/c70692c971cd4e899957f9d5b189372e
>> 2012-10-01 19:19:49,428
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
>> into c70692c971cd4e899957f9d5b189372e, size=813.7m; total size for
>> store is 813.7m
>> 2012-10-01 19:19:49,428
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.,
>> storeName=U, fileCount=2, fileSize=813.8m, priority=5,
>> time=10631557967436197; duration=14sec
>> 2012-10-01 19:19:49,428
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
>> 2012-10-01 19:19:49,429
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/.tmp,
>> seqid=132198642, totalSize=812.0m
>> 2012-10-01 19:20:38,718
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/.tmp/bf99f97891ed42f7847a11cfb8f46438
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/U/bf99f97891ed42f7847a11cfb8f46438
>> 2012-10-01 19:20:38,825
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
>> into bf99f97891ed42f7847a11cfb8f46438, size=811.3m; total size for
>> store is 811.3m
>> 2012-10-01 19:20:38,825
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.,
>> storeName=U, fileCount=2, fileSize=812.0m, priority=5,
>> time=10631557968183922; duration=49sec
>> 2012-10-01 19:20:38,826
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
>> 2012-10-01 19:20:38,826
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/.tmp,
>> seqid=132198138, totalSize=588.7m
>> 2012-10-01 19:20:48,274
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/.tmp/9f44b7eeab58407ca98bb4ec90126035
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/U/9f44b7eeab58407ca98bb4ec90126035
>> 2012-10-01 19:20:48,383
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
>> into 9f44b7eeab58407ca98bb4ec90126035, size=573.4m; total size for
>> store is 573.4m
>> 2012-10-01 19:20:48,383
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.,
>> storeName=U, fileCount=2, fileSize=588.7m, priority=5,
>> time=10631557968302831; duration=9sec
>> 2012-10-01 19:20:48,383
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
>> 2012-10-01 19:20:48,383
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/.tmp,
>> seqid=132198644, totalSize=870.8m
>> 2012-10-01 19:21:04,998
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/.tmp/920844c25b1847c6ac4b880e8cf1d5b0
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/U/920844c25b1847c6ac4b880e8cf1d5b0
>> 2012-10-01 19:21:05,107
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
>> into 920844c25b1847c6ac4b880e8cf1d5b0, size=869.0m; total size for
>> store is 869.0m
>> 2012-10-01 19:21:05,107
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.,
>> storeName=U, fileCount=2, fileSize=870.8m, priority=5,
>> time=10631557968521590; duration=16sec
>> 2012-10-01 19:21:05,107
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
>> 2012-10-01 19:21:05,107
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/.tmp,
>> seqid=132198622, totalSize=885.3m
>> 2012-10-01 19:21:27,231
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/.tmp/c85d413975d642fc914253bd08f3484f
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/U/c85d413975d642fc914253bd08f3484f
>> 2012-10-01 19:21:27,791
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
>> into c85d413975d642fc914253bd08f3484f, size=848.3m; total size for
>> store is 848.3m
>> 2012-10-01 19:21:27,791
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.,
>> storeName=U, fileCount=2, fileSize=885.3m, priority=5,
>> time=10631557968628383; duration=22sec
>> 2012-10-01 19:21:27,791
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
>> in region orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
>> 2012-10-01 19:21:27,791
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
>> file(s) in U of
>> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
>> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/.tmp,
>> seqid=132198621, totalSize=796.5m
>> 2012-10-01 19:21:42,374
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
>> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/.tmp/ce543c630dd142309af6dca2a9ab5786
>> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/U/ce543c630dd142309af6dca2a9ab5786
>> 2012-10-01 19:21:42,515
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
>> of 2 file(s) in U of
>> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
>> into ce543c630dd142309af6dca2a9ab5786, size=795.5m; total size for
>> store is 795.5m
>> 2012-10-01 19:21:42,516
>> [regionserver60020-largeCompactions-1348577979539] INFO
>> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
>> completed compaction:
>> regionName=orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.,
>> storeName=U, fileCount=2, fileSize=796.5m, priority=5,
>> time=10631557968713853; duration=14sec
>> 2012-10-01 19:49:58,159 [ResponseProcessor for block
>> blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor
>> exception  for block
>> blk_5535637699691880681_51616301java.io.EOFException
>>    at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>    at java.io.DataInputStream.readLong(DataInputStream.java:399)
>>    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:120)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2634)
>>
>> 2012-10-01 19:49:58,167 [IPC Server handler 87 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
>> {"processingtimems":46208,"client":"10.100.102.155:38534","timeRange":[0,9223372036854775807],"starttimems":1349120951956,"responsesize":329939,"class":"HRegionServer","table":"orwell_events","cacheBlocks":true,"families":{"U":["ALL"]},"row":"00322994","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
>> 2012-10-01 19:49:58,160
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have
>> not heard from server in 56633ms for sessionid 0x137ec64368509f7,
>> closing socket connection and attempting reconnect
>> 2012-10-01 19:49:58,160 [regionserver60020] WARN
>> org.apache.hadoop.hbase.util.Sleeper: We slept 49116ms instead of
>> 3000ms, this is likely due to a long garbage collecting pause and it's
>> usually bad, see
>> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
>> 2012-10-01 19:49:58,160
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have
>> not heard from server in 53359ms for sessionid 0x137ec64368509f6,
>> closing socket connection and attempting reconnect
>> 2012-10-01 19:49:58,320 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] INFO
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 waiting for responder to exit.
>> 2012-10-01 19:49:58,380 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>> 2012-10-01 19:49:58,380 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>> 10.100.101.156:50010
>> 2012-10-01 19:49:59,113 [regionserver60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: Unhandled
>> exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT
>> rejected; currently processing
>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>> org.apache.hadoop.hbase.YouAreDeadException:
>> org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected;
>> currently processing
>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>>    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
>>    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:797)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:688)
>>    at java.lang.Thread.run(Thread.java:662)
>> Caused by: org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected;
>> currently processing
>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>>    at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:222)
>>    at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:148)
>>    at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:844)
>>    at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:918)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>    at $Proxy8.regionServerReport(Unknown Source)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:794)
>>    ... 2 more
>> 2012-10-01 19:49:59,114 [regionserver60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>> abort: loaded coprocessors are: []
>> 2012-10-01 19:49:59,397 [IPC Server handler 36 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
>> {"processingtimems":47521,"client":"10.100.102.176:60221","timeRange":[0,9223372036854775807],"starttimems":1349120951875,"responsesize":699312,"class":"HRegionServer","table":"orwell_events","cacheBlocks":true,"families":{"U":["ALL"]},"row":"00318223","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
>> 2012-10-01 19:50:00,355 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
>> primary datanode 10.100.102.122:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy4.nextGenerationStamp(Unknown Source)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy14.recoverBlock(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:00,355
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to
>> server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181
>> 2012-10-01 19:50:00,356
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section
>> 'Client' could not be found. If you are not using SASL, you may ignore
>> this. On the other hand, if you expected SASL to work, please fix your
>> JAAS configuration.
>> 2012-10-01 19:50:00,356 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.122:50010 failed 1 times.  Pipeline was
>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>> retry...
>> 2012-10-01 19:50:00,357
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>> session
>> 2012-10-01 19:50:00,358
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>> server; r-o mode will be unavailable
>> 2012-10-01 19:50:00,359 [regionserver60020-EventThread] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272:
>> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6
>> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6 received
>> expired from ZooKeeper, aborting
>> org.apache.zookeeper.KeeperException$SessionExpiredException:
>> KeeperErrorCode = Session expired
>>    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:374)
>>    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:271)
>>    at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
>>    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
>> 2012-10-01 19:50:00,359
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper
>> service, session 0x137ec64368509f6 has expired, closing socket
>> connection
>> 2012-10-01 19:50:00,359 [regionserver60020-EventThread] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>> abort: loaded coprocessors are: []
>> 2012-10-01 19:50:00,367 [regionserver60020-EventThread] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>> readRequestsCount=6744201, writeRequestsCount=904280,
>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1201,
>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>> blockCacheCount=5435, blockCacheHitCount=321294212,
>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>> hdfsBlocksLocalityIndex=97
>> 2012-10-01 19:50:00,367 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>> readRequestsCount=6744201, writeRequestsCount=904280,
>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1201,
>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>> blockCacheCount=5435, blockCacheHitCount=321294212,
>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>> hdfsBlocksLocalityIndex=97
>> 2012-10-01 19:50:00,381
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to
>> server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181
>> 2012-10-01 19:50:00,401 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unhandled
>> exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT
>> rejected; currently processing
>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>> 2012-10-01 19:50:00,403
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section
>> 'Client' could not be found. If you are not using SASL, you may ignore
>> this. On the other hand, if you expected SASL to work, please fix your
>> JAAS configuration.
>> 2012-10-01 19:50:00,412 [regionserver60020-EventThread] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED:
>> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6
>> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6 received
>> expired from ZooKeeper, aborting
>> 2012-10-01 19:50:00,412 [regionserver60020-EventThread] INFO
>> org.apache.zookeeper.ClientCnxn: EventThread shut down
>> 2012-10-01 19:50:00,412 [regionserver60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
>> 2012-10-01 19:50:00,413
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>> session
>> 2012-10-01 19:50:00,413 [IPC Server handler 9 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020:
>> exiting
>> 2012-10-01 19:50:00,413 [IPC Server handler 20 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 20 on 60020:
>> exiting
>> 2012-10-01 19:50:00,413 [IPC Server handler 2 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020:
>> exiting
>> 2012-10-01 19:50:00,413 [IPC Server handler 10 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 10 on 60020:
>> exiting
>> 2012-10-01 19:50:00,413 [IPC Server listener on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on
>> 60020
>> 2012-10-01 19:50:00,413 [IPC Server handler 12 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 12 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 21 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 21 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 13 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 13 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 19 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 19 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 22 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 22 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 11 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 11 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.SplitLogWorker: Sending interrupt
>> to stop the worker thread
>> 2012-10-01 19:50:00,414 [IPC Server handler 6 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Stopping
>> infoServer
>> 2012-10-01 19:50:00,414 [IPC Server handler 0 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 28 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 28 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 7 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020:
>> exiting
>> 2012-10-01 19:50:00,413 [IPC Server handler 15 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 15 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 5 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 48 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 48 on 60020:
>> exiting
>> 2012-10-01 19:50:00,413 [IPC Server handler 14 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 14 on 60020:
>> exiting
>> 2012-10-01 19:50:00,413 [IPC Server handler 18 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 18 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 37 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 37 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 47 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 47 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 50 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 50 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 45 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 45 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 36 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 36 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 43 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 43 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 42 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 42 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 38 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 38 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 8 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 40 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 40 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 34 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 34 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 4 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020:
>> exiting
>> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@5fa9b60a,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320394"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.117:56438: output error
>> 2012-10-01 19:50:00,414 [IPC Server handler 61 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 1768076108943205533:java.io.InterruptedIOException:
>> Interruped while waiting for IO on channel
>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59104
>> remote=/10.100.101.156:50010]. 59988 millis timeout left.
>>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1243)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020
>> caught: java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 31 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 31 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414
>> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272]
>> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
>> SplitLogWorker interrupted while waiting for task, exiting:
>> java.lang.InterruptedException
>> 2012-10-01 19:50:00,563
>> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272]
>> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
>> SplitLogWorker data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
>> exiting
>> 2012-10-01 19:50:00,414 [IPC Server handler 1 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 3201413024070455305:java.io.InterruptedIOException:
>> Interruped while waiting for IO on channel
>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59115
>> remote=/10.100.101.156:50010]. 59999 millis timeout left.
>>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>    at java.io.DataInputStream.readShort(DataInputStream.java:295)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,563 [IPC Server Responder] INFO
>> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
>> 2012-10-01 19:50:00,414 [IPC Server handler 27 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 27 on 60020:
>> exiting
>> 2012-10-01 19:50:00,414
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>> server; r-o mode will be unavailable
>> 2012-10-01 19:50:00,414 [IPC Server handler 55 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block -2144655386884254555:java.io.InterruptedIOException:
>> Interruped while waiting for IO on channel
>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59108
>> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>    at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1350)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,649
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper
>> service, session 0x137ec64368509f7 has expired, closing socket
>> connection
>> 2012-10-01 19:50:00,414 [IPC Server handler 39 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.173:50010 for file
>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
>> for block -2100467641393578191:java.io.InterruptedIOException:
>> Interruped while waiting for IO on channel
>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:48825
>> remote=/10.100.102.173:50010]. 60000 millis timeout left.
>>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,414 [IPC Server handler 26 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block -5183799322211896791:java.io.InterruptedIOException:
>> Interruped while waiting for IO on channel
>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59078
>> remote=/10.100.101.156:50010]. 59949 millis timeout left.
>>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,414 [IPC Server handler 85 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block -5183799322211896791:java.io.InterruptedIOException:
>> Interruped while waiting for IO on channel
>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59082
>> remote=/10.100.101.156:50010]. 59950 millis timeout left.
>>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,414 [IPC Server handler 57 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block -1763662403960466408:java.io.InterruptedIOException:
>> Interruped while waiting for IO on channel
>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59116
>> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>    at java.io.DataInputStream.readShort(DataInputStream.java:295)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,649 [IPC Server handler 79 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,649 [regionserver60020-EventThread] INFO
>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>> This client just lost it's session with ZooKeeper, trying to
>> reconnect.
>> 2012-10-01 19:50:00,649 [IPC Server handler 89 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,649 [PRI IPC Server handler 3 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 3 on 60020:
>> exiting
>> 2012-10-01 19:50:00,649 [PRI IPC Server handler 0 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020:
>> exiting
>> 2012-10-01 19:50:00,700 [IPC Server handler 56 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 56 on 60020:
>> exiting
>> 2012-10-01 19:50:00,649 [PRI IPC Server handler 2 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 2 on 60020:
>> exiting
>> 2012-10-01 19:50:00,701 [IPC Server handler 54 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 54 on 60020:
>> exiting
>> 2012-10-01 19:50:00,563 [IPC Server Responder] INFO
>> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
>> 2012-10-01 19:50:00,701 [IPC Server handler 71 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 71 on 60020:
>> exiting
>> 2012-10-01 19:50:00,701 [IPC Server handler 79 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.193:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,563 [IPC Server handler 16 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 9 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 9 on 60020:
>> exiting
>> 2012-10-01 19:50:00,563 [IPC Server handler 17 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,415 [IPC Server handler 60 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@7eee7b96,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321525"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.125:49043: output error
>> 2012-10-01 19:50:00,704 [IPC Server handler 3 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 6550563574061266649:java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,717 [IPC Server handler 49 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 49 on 60020:
>> exiting
>> 2012-10-01 19:50:00,717 [IPC Server handler 94 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 94 on 60020:
>> exiting
>> 2012-10-01 19:50:00,701 [IPC Server handler 83 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 83 on 60020:
>> exiting
>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 1 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 1 on 60020:
>> exiting
>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 7 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 7 on 60020:
>> exiting
>> 2012-10-01 19:50:00,701 [IPC Server handler 82 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 82 on 60020:
>> exiting
>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 6 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 6 on 60020:
>> exiting
>> 2012-10-01 19:50:00,719 [IPC Server handler 16 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.107:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,701 [IPC Server handler 74 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 74 on 60020:
>> exiting
>> 2012-10-01 19:50:00,719 [IPC Server handler 86 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 86 on 60020:
>> exiting
>> 2012-10-01 19:50:00,721 [IPC Server handler 60 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020
>> caught: java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:00,721 [IPC Server handler 60 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020:
>> exiting
>> 2012-10-01 19:50:00,701 [PRI IPC Server handler 5 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 5 on 60020:
>> exiting
>> 2012-10-01 19:50:00,701 [regionserver60020] INFO org.mortbay.log:
>> Stopped SelectChannelConnector@0.0.0.0:60030
>> 2012-10-01 19:50:00,722 [IPC Server handler 35 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 35 on 60020:
>> exiting
>> 2012-10-01 19:50:00,722 [IPC Server handler 16 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.133:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,722 [IPC Server handler 98 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 98 on 60020:
>> exiting
>> 2012-10-01 19:50:00,701 [IPC Server handler 68 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 68 on 60020:
>> exiting
>> 2012-10-01 19:50:00,701 [IPC Server handler 64 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 64 on 60020:
>> exiting
>> 2012-10-01 19:50:00,673 [IPC Server handler 33 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 33 on 60020:
>> exiting
>> 2012-10-01 19:50:00,736 [IPC Server handler 76 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 76 on 60020:
>> exiting
>> 2012-10-01 19:50:00,673 [regionserver60020-EventThread] INFO
>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>> Trying to reconnect to zookeeper
>> 2012-10-01 19:50:00,736 [IPC Server handler 84 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 84 on 60020:
>> exiting
>> 2012-10-01 19:50:00,736 [IPC Server handler 95 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 95 on 60020:
>> exiting
>> 2012-10-01 19:50:00,736 [IPC Server handler 75 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 75 on 60020:
>> exiting
>> 2012-10-01 19:50:00,736 [IPC Server handler 92 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 92 on 60020:
>> exiting
>> 2012-10-01 19:50:00,736 [IPC Server handler 88 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 88 on 60020:
>> exiting
>> 2012-10-01 19:50:00,736 [IPC Server handler 67 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 67 on 60020:
>> exiting
>> 2012-10-01 19:50:00,736 [IPC Server handler 30 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 30 on 60020:
>> exiting
>> 2012-10-01 19:50:00,736 [IPC Server handler 80 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 80 on 60020:
>> exiting
>> 2012-10-01 19:50:00,736 [IPC Server handler 62 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 62 on 60020:
>> exiting
>> 2012-10-01 19:50:00,722 [IPC Server handler 52 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 52 on 60020:
>> exiting
>> 2012-10-01 19:50:00,722 [IPC Server handler 32 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 32 on 60020:
>> exiting
>> 2012-10-01 19:50:00,722 [IPC Server handler 97 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 97 on 60020:
>> exiting
>> 2012-10-01 19:50:00,722 [IPC Server handler 96 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 96 on 60020:
>> exiting
>> 2012-10-01 19:50:00,722 [IPC Server handler 93 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 93 on 60020:
>> exiting
>> 2012-10-01 19:50:00,722 [IPC Server handler 73 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 73 on 60020:
>> exiting
>> 2012-10-01 19:50:00,722 [IPC Server handler 17 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.47:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,722 [IPC Server handler 87 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 87 on 60020:
>> exiting
>> 2012-10-01 19:50:00,721 [IPC Server handler 81 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 81 on 60020:
>> exiting
>> 2012-10-01 19:50:00,721 [IPC Server handler 90 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,721 [IPC Server handler 59 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>> for block -9081461281107361903:java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,719 [IPC Server handler 65 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 65 on 60020:
>> exiting
>> 2012-10-01 19:50:00,721 [IPC Server handler 29 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 8387547514055202675:java.nio.channels.ClosedChannelException
>>    at java.nio.channels.spi.AbstractSelectableChannel.configureBlocking(AbstractSelectableChannel.java:252)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.<init>(SocketIOWithTimeout.java:66)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.<init>(SocketInputStream.java:50)
>>    at org.apache.hadoop.net.SocketInputStream.<init>(SocketInputStream.java:73)
>>    at org.apache.hadoop.net.SocketInputStream.<init>(SocketInputStream.java:91)
>>    at org.apache.hadoop.net.NetUtils.getInputStream(NetUtils.java:323)
>>    at org.apache.hadoop.net.NetUtils.getInputStream(NetUtils.java:299)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1474)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,721 [IPC Server handler 66 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 1768076108943205533:java.io.InterruptedIOException:
>> Interruped while waiting for IO on channel
>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59074
>> remote=/10.100.101.156:50010]. 59947 millis timeout left.
>>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,811 [IPC Server handler 59 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.135:50010 for file
>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>> for block -9081461281107361903:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,719 [IPC Server handler 58 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.io.InterruptedIOException:
>> Interruped while waiting for IO on channel
>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59107
>> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,831 [IPC Server handler 29 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.153:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,719 [IPC Server handler 39 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.144:50010 for file
>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
>> for block -2100467641393578191:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,719 [IPC Server handler 26 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.138:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,852 [IPC Server handler 66 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.174:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,719 [IPC Server handler 41 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>> for block 5946486101046455013:java.io.InterruptedIOException:
>> Interruped while waiting for IO on channel
>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59091
>> remote=/10.100.101.156:50010]. 59953 millis timeout left.
>>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,719 [IPC Server handler 1 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.148:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,719 [IPC Server handler 53 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 53 on 60020:
>> exiting
>> 2012-10-01 19:50:00,719 [IPC Server handler 79 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.154:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,719 [IPC Server handler 89 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.47:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,719 [IPC Server handler 46 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 4946845190538507957:java.io.InterruptedIOException:
>> Interruped while waiting for IO on channel
>> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59113
>> remote=/10.100.101.156:50010]. 59999 millis timeout left.
>>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>    at java.io.DataInputStream.readShort(DataInputStream.java:295)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,895 [IPC Server handler 26 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.139:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,701 [IPC Server handler 91 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 91 on 60020:
>> exiting
>> 2012-10-01 19:50:00,717 [IPC Server handler 3 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.114:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 6550563574061266649:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,899 [IPC Server handler 89 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.134:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,717 [PRI IPC Server handler 4 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 4 on 60020:
>> exiting
>> 2012-10-01 19:50:00,717 [IPC Server handler 77 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 77 on 60020:
>> exiting
>> 2012-10-01 19:50:00,717 [PRI IPC Server handler 8 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 8 on 60020:
>> exiting
>> 2012-10-01 19:50:00,717 [IPC Server handler 99 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 99 on 60020:
>> exiting
>> 2012-10-01 19:50:00,717 [IPC Server handler 85 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.138:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,717 [IPC Server handler 51 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 51 on 60020:
>> exiting
>> 2012-10-01 19:50:00,717 [IPC Server handler 57 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.138:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,717 [IPC Server handler 55 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.180:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block -2144655386884254555:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,717 [IPC Server handler 70 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 70 on 60020:
>> exiting
>> 2012-10-01 19:50:00,717 [IPC Server handler 61 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.174:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,904 [IPC Server handler 55 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.173:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block -2144655386884254555:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,705 [IPC Server handler 23 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,705 [IPC Server handler 24 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 2851854722247682142:java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,704 [IPC Server handler 25 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>>    at java.io.DataInputStream.read(DataInputStream.java:132)
>>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,904 [IPC Server handler 61 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.97:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,904 [IPC Server handler 23 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.144:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,904 [regionserver60020-EventThread] INFO
>> org.apache.zookeeper.ZooKeeper: Initiating client connection,
>> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
>> sessionTimeout=180000 watcher=hconnection
>> 2012-10-01 19:50:00,905 [IPC Server handler 25 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.72:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,904 [IPC Server handler 55 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_-2144655386884254555_51616216 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,904 [IPC Server handler 57 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.144:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,901 [IPC Server handler 85 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_-5183799322211896791_51616591 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,899 [IPC Server handler 89 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_5937357897784147544_51616546 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,899 [IPC Server handler 3 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_6550563574061266649_51616152 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,896 [IPC Server handler 46 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_4946845190538507957_51616628 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,896 [IPC Server handler 41 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.133:50010 for file
>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,896 [IPC Server handler 26 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_-5183799322211896791_51616591 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,896 [IPC Server handler 1 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.175:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,895 [IPC Server handler 66 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.97:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,894 [IPC Server handler 39 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.151:50010 for file
>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
>> for block -2100467641393578191:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,894 [IPC Server handler 79 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_2209451090614340242_51616188 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,857 [IPC Server handler 29 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.101:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,856 [IPC Server handler 58 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.134:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,839 [IPC Server handler 59 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.194:50010 for file
>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>> for block -9081461281107361903:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,811 [IPC Server handler 16 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_4946845190538507957_51616628 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,787 [IPC Server handler 90 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.134:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,780 [IPC Server handler 17 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.134:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,736 [IPC Server handler 63 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 63 on 60020:
>> exiting
>> 2012-10-01 19:50:00,736 [IPC Server handler 72 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 72 on 60020:
>> exiting
>> 2012-10-01 19:50:00,736 [IPC Server handler 78 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 78 on 60020:
>> exiting
>> 2012-10-01 19:50:00,906 [IPC Server handler 59 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_-9081461281107361903_51616031 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,906 [IPC Server handler 39 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_-2100467641393578191_51531005 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,906 [IPC Server handler 41 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.145:50010 for file
>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,905 [IPC Server handler 57 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_-1763662403960466408_51616605 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,905 [IPC Server handler 25 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.162:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,904 [IPC Server handler 23 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_-1763662403960466408_51616605 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,904 [IPC Server handler 24 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.72:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:00,904 [IPC Server handler 61 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_1768076108943205533_51616106 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:00,941 [regionserver60020-SendThread()] INFO
>> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
>> /10.100.102.197:2181
>> 2012-10-01 19:50:00,941 [regionserver60020-EventThread] INFO
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
>> of this process is 20776@data3024.ngpipes.milp.ngmoco.com
>> 2012-10-01 19:50:00,942
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section
>> 'Client' could not be found. If you are not using SASL, you may ignore
>> this. On the other hand, if you expected SASL to work, please fix your
>> JAAS configuration.
>> 2012-10-01 19:50:00,943
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>> session
>> 2012-10-01 19:50:00,962
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>> server; r-o mode will be unavailable
>> 2012-10-01 19:50:00,962
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
>> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
>> sessionid = 0x137ec64373dd4b3, negotiated timeout = 40000
>> 2012-10-01 19:50:00,971 [regionserver60020-EventThread] INFO
>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>> Reconnected successfully. This disconnect could have been caused by a
>> network partition or a long-running GC pause, either way it's
>> recommended that you verify your environment.
>> 2012-10-01 19:50:00,971 [regionserver60020-EventThread] INFO
>> org.apache.zookeeper.ClientCnxn: EventThread shut down
>> 2012-10-01 19:50:01,018 [IPC Server handler 41 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,018 [IPC Server handler 24 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,019 [IPC Server handler 41 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.133:50010 for file
>> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
>> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>>    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,019 [IPC Server handler 41 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_5946486101046455013_51616031 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:01,020 [IPC Server handler 25 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.162:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,021 [IPC Server handler 29 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,023 [IPC Server handler 90 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.47:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,023 [IPC Server handler 17 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.47:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,024 [IPC Server handler 66 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.174:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,024 [IPC Server handler 61 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@20c6e4bc,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321393"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.118:57165: output error
>> 2012-10-01 19:50:01,038 [IPC Server handler 61 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:01,038 [IPC Server handler 58 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.134:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,038 [IPC Server handler 61 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020:
>> exiting
>> 2012-10-01 19:50:01,038 [IPC Server handler 1 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.148:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,039 [IPC Server handler 66 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.97:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,039 [IPC Server handler 29 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.153:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,039 [IPC Server handler 66 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_1768076108943205533_51616106 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:01,039 [IPC Server handler 29 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.102.101:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,041 [IPC Server handler 29 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.156:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,042 [IPC Server handler 29 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.153:50010 for file
>> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,044 [IPC Server handler 1 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
>> /10.100.101.175:50010 for file
>> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>>
>> 2012-10-01 19:50:01,090 [IPC Server handler 29 on 60020] ERROR
>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
>> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03,
>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>> [cacheCompressed=false],
>> firstKey=00321084/U:BAHAMUTIOS_1/1348883706322/Put,
>> lastKey=00324324/U:user/1348900694793/Put, avgKeyLen=31,
>> avgValueLen=125185, entries=6053, length=758129544,
>> cur=00321312/U:KINGDOMSQUESTSIPAD_2/1349024761759/Put/vlen=460950]
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Could not obtain block:
>> blk_8387547514055202675_51616042
>> file=/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    ... 17 more
>> 2012-10-01 19:50:01,091 [IPC Server handler 24 on 60020] ERROR
>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>> [cacheCompressed=false],
>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>> avgValueLen=89140, entries=7365, length=656954017,
>> cur=00318964/U:user/1349118541276/Put/vlen=311046]
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Could not obtain block:
>> blk_2851854722247682142_51616579
>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    ... 14 more
>> 2012-10-01 19:50:01,091 [IPC Server handler 1 on 60020] ERROR
>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>> [cacheCompressed=false],
>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>> avgValueLen=89140, entries=7365, length=656954017,
>> cur=0032027/U:KINGDOMSQUESTS_10/1349118531396/Put/vlen=401149]
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Could not obtain block:
>> blk_3201413024070455305_51616611
>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    ... 14 more
>> 2012-10-01 19:50:01,091 [IPC Server handler 25 on 60020] ERROR
>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>> [cacheCompressed=false],
>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>> avgValueLen=89140, entries=7365, length=656954017,
>> cur=00319173/U:TINYTOWERANDROID_3/1349024232716/Put/vlen=129419]
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Could not obtain block:
>> blk_2851854722247682142_51616579
>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    ... 14 more
>> 2012-10-01 19:50:01,091 [IPC Server handler 90 on 60020] ERROR
>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>> [cacheCompressed=false],
>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>> avgValueLen=89140, entries=7365, length=656954017,
>> cur=00316914/U:PETCAT_2/1349118542022/Put/vlen=499140]
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Could not obtain block:
>> blk_5937357897784147544_51616546
>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    ... 14 more
>> 2012-10-01 19:50:01,091 [IPC Server handler 17 on 60020] ERROR
>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
>> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>> [cacheCompressed=false],
>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>> avgValueLen=89140, entries=7365, length=656954017,
>> cur=00317054/U:BAHAMUTIOS_4/1348869430278/Put/vlen=104012]
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Could not obtain block:
>> blk_5937357897784147544_51616546
>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    ... 17 more
>> 2012-10-01 19:50:01,091 [IPC Server handler 58 on 60020] ERROR
>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
>> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>> [cacheCompressed=false],
>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>> avgValueLen=89140, entries=7365, length=656954017,
>> cur=00316983/U:TINYTOWERANDROID_1/1349118439250/Put/vlen=417924]
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Could not obtain block:
>> blk_5937357897784147544_51616546
>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>>    ... 14 more
>> 2012-10-01 19:50:01,091 [IPC Server handler 89 on 60020] ERROR
>> org.apache.hadoop.hbase.regionserver.HRegionServer:
>> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
>> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
>> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
>> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
>> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
>> [cacheCompressed=false],
>> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
>> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
>> avgValueLen=89140, entries=7365, length=656954017,
>> cur=00317043/U:BAHAMUTANDROID_7/1348968079952/Put/vlen=419212]
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Could not obtain block:
>> blk_5937357897784147544_51616546
>> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>>    ... 17 more
>> 2012-10-01 19:50:01,094 [IPC Server handler 58 on 60020] WARN
>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>> server
>> java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> 2012-10-01 19:50:01,094 [IPC Server handler 17 on 60020] WARN
>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>> server
>> java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> 2012-10-01 19:50:01,093 [IPC Server handler 90 on 60020] WARN
>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>> server
>> java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> 2012-10-01 19:50:01,093 [IPC Server handler 25 on 60020] WARN
>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>> server
>> java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> 2012-10-01 19:50:01,092 [IPC Server handler 1 on 60020] WARN
>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>> server
>> java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> 2012-10-01 19:50:01,092 [IPC Server handler 24 on 60020] WARN
>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>> server
>> java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> 2012-10-01 19:50:01,091 [IPC Server handler 29 on 60020] WARN
>> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
>> server
>> java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> 2012-10-01 19:50:01,095 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>> 2012-10-01 19:50:01,097 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>> 10.100.101.156:50010
>> 2012-10-01 19:50:01,115 [IPC Server handler 39 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@2743ecf8,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00390925"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.122:51758: output error
>> 2012-10-01 19:50:01,139 [IPC Server handler 39 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:01,139 [IPC Server handler 39 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020:
>> exiting
>> 2012-10-01 19:50:01,151 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
>> primary datanode 10.100.102.122:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy4.nextGenerationStamp(Unknown Source)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy14.recoverBlock(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:01,151 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.122:50010 failed 2 times.  Pipeline was
>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>> retry...
>> 2012-10-01 19:50:01,153 [IPC Server handler 89 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@7137feec,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00317043"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.68:55302: output error
>> 2012-10-01 19:50:01,154 [IPC Server handler 89 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:01,154 [IPC Server handler 89 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020:
>> exiting
>> 2012-10-01 19:50:01,156 [IPC Server handler 66 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@6b9a9eba,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321504"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.176:32793: output error
>> 2012-10-01 19:50:01,157 [IPC Server handler 66 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:01,158 [IPC Server handler 66 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020:
>> exiting
>> 2012-10-01 19:50:01,159 [IPC Server handler 41 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@586761c,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00391525"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.155:39850: output error
>> 2012-10-01 19:50:01,160 [IPC Server handler 41 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:01,160 [IPC Server handler 41 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020:
>> exiting
>> 2012-10-01 19:50:01,216 [regionserver60020.compactionChecker] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer$CompactionChecker:
>> regionserver60020.compactionChecker exiting
>> 2012-10-01 19:50:01,216 [regionserver60020.logRoller] INFO
>> org.apache.hadoop.hbase.regionserver.LogRoller: LogRoller exiting.
>> 2012-10-01 19:50:01,216 [regionserver60020.cacheFlusher] INFO
>> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
>> regionserver60020.cacheFlusher exiting
>> 2012-10-01 19:50:01,217 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: aborting server
>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
>> 2012-10-01 19:50:01,218 [regionserver60020] INFO
>> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
>> Closed zookeeper sessionid=0x137ec64373dd4b3
>> 2012-10-01 19:50:01,270
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,24294294,1349027918385.068e6f4f7b8a81fb21e49fe3ac47f262.
>> 2012-10-01 19:50:01,271
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,96510144,1348960969795.fe2a133a17d09a65a6b0d4fb60e6e051.
>> 2012-10-01 19:50:01,272
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56499174,1349027424070.7f767ca333bef3dcdacc9a6c673a8350.
>> 2012-10-01 19:50:01,273
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,96515494,1348960969795.8ab4e1d9f4e4c388f3f4f18eec637e8a.
>> 2012-10-01 19:50:01,273
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,98395724,1348969940123.08188cc246bf752c17cfe57f99970924.
>> 2012-10-01 19:50:01,274
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
>> 2012-10-01 19:50:01,275
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56604984,1348940650040.14639a082062e98abfea8ae3fff5d2c7.
>> 2012-10-01 19:50:01,275
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56880144,1348969971950.ece85a086a310aacc2da259a3303e67e.
>> 2012-10-01 19:50:01,276
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
>> 2012-10-01 19:50:01,277
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,31267284,1348961229728.fc429276c44f5c274f00168f12128bad.
>> 2012-10-01 19:50:01,278
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56569824,1348940809479.9808dac5b895fc9b8f9892c4b72b3804.
>> 2012-10-01 19:50:01,279
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56425354,1349031095620.e4965f2e57729ff9537986da3e19258c.
>> 2012-10-01 19:50:01,280
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,96504305,1348964001164.77f75cf8ba76ebc4417d49f019317d0a.
>> 2012-10-01 19:50:01,280
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,60743825,1348962513777.f377f704db5f0d000e36003338e017b1.
>> 2012-10-01 19:50:01,283
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,09603014,1349026790546.d634bfe659bdf2f45ec89e53d2d38791.
>> 2012-10-01 19:50:01,283
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,31274021,1348961229728.e93382b458a84c22f2e5aeb9efa737b5.
>> 2012-10-01 19:50:01,285
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56462454,1348982699951.a2dafbd054bf65aa6f558dc9a2d839a1.
>> 2012-10-01 19:50:01,286
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> Orwell,48814673,1348270987327.29818ea19d62126d5616a7ba7d7dae21.
>> 2012-10-01 19:50:01,288
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56610954,1348940650040.3609c1bfc2be6936577b6be493e7e8d9.
>> 2012-10-01 19:50:01,289
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
>> 2012-10-01 19:50:01,289
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,05205763,1348941089603.957ea0e428ba6ff21174ecdda96f9fdc.
>> 2012-10-01 19:50:01,289
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56349615,1348941138879.dfabbd25c59fd6c34a58d9eacf4c096f.
>> 2012-10-01 19:50:01,292
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56503505,1349027424070.129160a78f13c17cc9ea16ff3757cda9.
>> 2012-10-01 19:50:01,292
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,91248264,1348942310344.a93982b8f91f260814885bc0afb4fbb9.
>> 2012-10-01 19:50:01,293
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,98646724,1348980566403.a4f2a16d1278ad1246068646c4886502.
>> 2012-10-01 19:50:01,293
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56454594,1348982903997.7107c6a1b2117fb59f68210ce82f2cc9.
>> 2012-10-01 19:50:01,294
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56564144,1348940809479.636092bb3ec2615b115257080427d091.
>> 2012-10-01 19:50:01,295
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_user_events,06252594,1348582793143.499f0a0f4704afa873c83f141f5e0324.
>> 2012-10-01 19:50:01,296
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56617164,1348941287729.3992a80a6648ab62753b4998331dcfdf.
>> 2012-10-01 19:50:01,296
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,98390944,1348969940123.af160e450632411818fa8d01b2c2ed0b.
>> 2012-10-01 19:50:01,297
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56703743,1348941223663.5cc2fcb82080dbf14956466c31f1d27c.
>> 2012-10-01 19:50:01,297
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
>> 2012-10-01 19:50:01,298
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56693584,1348942631318.f01b179c1fad1f18b97b37fc8f730898.
>> 2012-10-01 19:50:01,299
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_user_events,12140615,1348582250428.7822f7f5ceea852b04b586fdf34debff.
>> 2012-10-01 19:50:01,300
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
>> 2012-10-01 19:50:01,300
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,96420705,1348942597601.a063e06eb840ee49bb88474ee8e22160.
>> 2012-10-01 19:50:01,300
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
>> 2012-10-01 19:50:01,300
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,96432674,1348961425148.1a793cf2137b9599193a1e2d5d9749c5.
>> 2012-10-01 19:50:01,302
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
>> 2012-10-01 19:50:01,303
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,44371574,1348961840615.00f5b4710a43f2ee75d324bebb054323.
>> 2012-10-01 19:50:01,304
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,562fc921,1348941189517.cff261c585416844113f232960c8d6b4.
>> 2012-10-01 19:50:01,304
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56323831,1348941216581.0b0f3bdb03ce9e4f58156a4143018e0e.
>> 2012-10-01 19:50:01,305
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56480194,1349028080664.03a7046ffcec7e1f19cdb2f9890a353e.
>> 2012-10-01 19:50:01,306
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56418294,1348940288044.c872be05981c047e8c1ee4765b92a74d.
>> 2012-10-01 19:50:01,306
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,53590305,1348940776419.4c98d7846622f2d8dad4e998dae81d2b.
>> 2012-10-01 19:50:01,307
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,96445963,1348942353563.66a0f602720191bf21a1dfd12eec4a35.
>> 2012-10-01 19:50:01,307
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
>> 2012-10-01 19:50:01,307
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56305294,1348941189517.20f67941294c259e2273d3e0b7ae5198.
>> 2012-10-01 19:50:01,308
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56516115,1348981132325.0f753cb87c1163d95d9d10077d6308db.
>> 2012-10-01 19:50:01,309
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56796924,1348941269761.843e0aee0b15d67b810c7b3fe5a2dda7.
>> 2012-10-01 19:50:01,309
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56440004,1348941150045.7033cb81a66e405d7bf45cd55ab010e3.
>> 2012-10-01 19:50:01,309
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56317864,1348941124299.0de45283aa626fc83b2c026e1dd8bfec.
>> 2012-10-01 19:50:01,310
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56809673,1348941834500.08244d4ed5f7fdf6d9ac9c73fbfd3947.
>> 2012-10-01 19:50:01,310
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56894864,1348970959541.fc19a6ffe18f29203369d32ad1b102ce.
>> 2012-10-01 19:50:01,311
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56382491,1348940876960.2392137bf0f4cb695c08c0fb22ce5294.
>> 2012-10-01 19:50:01,312
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,95128264,1349026585563.5dc569af8afe0a84006b80612c15007f.
>> 2012-10-01 19:50:01,312
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,5631146,1348941124299.b7c10be9855b5e8ba3a76852920627f9.
>> 2012-10-01 19:50:01,312
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56710424,1348940462668.a370c149c232ebf4427e070eb28079bc.
>> 2012-10-01 19:50:01,314 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Session: 0x137ec64373dd4b3 closed
>> 2012-10-01 19:50:01,314 [regionserver60020-EventThread] INFO
>> org.apache.zookeeper.ClientCnxn: EventThread shut down
>> 2012-10-01 19:50:01,314 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 78
>> regions to close
>> 2012-10-01 19:50:01,317
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,96497834,1348964001164.0b12f37b74b2124ef9f27d1ef0ebb17a.
>> 2012-10-01 19:50:01,318
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56507574,1349027965795.79113c51d318a11286b39397ebbfdf04.
>> 2012-10-01 19:50:01,319
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,24297525,1349027918385.047533f3d801709a26c895a01dcc1a73.
>> 2012-10-01 19:50:01,320
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,96439694,1348961425148.038e0e43a6e56760e4daae6f34bfc607.
>> 2012-10-01 19:50:01,320
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,82811715,1348904784424.88fae4279f9806bef745d90f7ad37241.
>> 2012-10-01 19:50:01,321
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56699434,1348941223663.ef3ccf0af60ee87450806b393f89cb6e.
>> 2012-10-01 19:50:01,321
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
>> 2012-10-01 19:50:01,322
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
>> 2012-10-01 19:50:01,322
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
>> 2012-10-01 19:50:01,323
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56465563,1348982699951.f34a29c0c4fc32e753d12db996ccc995.
>> 2012-10-01 19:50:01,324
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56450734,1349027937173.c70110b3573a48299853117c4287c7be.
>> 2012-10-01 19:50:01,325
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56361984,1349029457686.6c8d6974741e59df971da91c7355de1c.
>> 2012-10-01 19:50:01,327
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56814705,1348962077056.69fd74167a3c5c2961e45d339b962ca9.
>> 2012-10-01 19:50:01,327
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,00389105,1348978080963.6463149a16179d4e44c19bb49e4b4a81.
>> 2012-10-01 19:50:01,329
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56558944,1348940893836.03bd1c0532949ec115ca8d5215dbb22f.
>> 2012-10-01 19:50:01,330 [IPC Server handler 59 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@112ba2bf,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00392783"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.101.135:34935: output error
>> 2012-10-01 19:50:01,330
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,5658955,1349027142822.e65d0c1f452cb41d47ad08560c653607.
>> 2012-10-01 19:50:01,331 [IPC Server handler 59 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:01,331
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56402364,1349049689267.27b452f3bcce0815b7bf92370cbb51de.
>> 2012-10-01 19:50:01,331 [IPC Server handler 59 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020:
>> exiting
>> 2012-10-01 19:50:01,332
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,96426544,1348942597601.addf704f99dd1b2e07b3eff505e2c811.
>> 2012-10-01 19:50:01,333
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,60414161,1348962852909.c6b1b21f00bbeef8648c4b9b3d28b49a.
>> 2012-10-01 19:50:01,333
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56552794,1348940893836.5314886f88f6576e127757faa25cef7c.
>> 2012-10-01 19:50:01,335
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56910924,1348962040261.fdedae86206fc091a72dde52a3d0d0b4.
>> 2012-10-01 19:50:01,335
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56720084,1349029064698.ee5cb00ab358be0d2d36c59189da32f8.
>> 2012-10-01 19:50:01,336
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56624533,1348941287729.6121fce2c31d4754b4ad4e855d85b501.
>> 2012-10-01 19:50:01,336
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56899934,1348970959541.f34f01dd65e293cb6ab13de17ac91eec.
>> 2012-10-01 19:50:01,337
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
>> 2012-10-01 19:50:01,337
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56405923,1349049689267.bb4be5396608abeff803400cdd2408f4.
>> 2012-10-01 19:50:01,338
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56364924,1349029457686.1e1c09b6eb734d8ad48ea0b4fa103381.
>> 2012-10-01 19:50:01,339
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56784073,1348961864297.f01eaf712e59a0bca989ced951caf4f1.
>> 2012-10-01 19:50:01,340
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56594534,1349027142822.8e67bb85f4906d579d4d278d55efce0b.
>> 2012-10-01 19:50:01,340
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
>> 2012-10-01 19:50:01,340
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56491525,1349027928183.7bbfb4d39ef4332e17845001191a6ad4.
>> 2012-10-01 19:50:01,341
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,07123624,1348959804638.c114ec80c6693a284741e220da028736.
>> 2012-10-01 19:50:01,342
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
>> 2012-10-01 19:50:01,342
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56546534,1348941049708.bde2614732f938db04fdd81ed6dbfcf2.
>> 2012-10-01 19:50:01,343
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,569054,1348962040261.a7942d7837cd57b68d156d2ce7e3bd5f.
>> 2012-10-01 19:50:01,343
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56576714,1348982931576.3dd5bf244fb116cf2b6f812fcc39ad2d.
>> 2012-10-01 19:50:01,344
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,5689007,1348963034009.c4b16ea4d8dbc66c301e67d8e58a7e48.
>> 2012-10-01 19:50:01,344
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56410784,1349027912141.6de7be1745c329cf9680ad15e9bde594.
>> 2012-10-01 19:50:01,345
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
>> 2012-10-01 19:50:01,345
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,96457954,1348964300132.674a03f0c9866968aabd70ab38a482c0.
>> 2012-10-01 19:50:01,346
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56483084,1349027988535.de732d7e63ea53331b80255f51fc1a86.
>> 2012-10-01 19:50:01,347
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56790484,1348941269761.5bcc58c48351de449cc17307ab4bf777.
>> 2012-10-01 19:50:01,348
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56458293,1348982903997.4f67e6f4949a2ef7f4903f78f54c474e.
>> 2012-10-01 19:50:01,348
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,95123235,1349026585563.a359eb4cb88d34a529804e50a5affa24.
>> 2012-10-01 19:50:01,349
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
>> 2012-10-01 19:50:01,350
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56368484,1348941099873.cef2729093a0d7d72b71fac1b25c0a40.
>> 2012-10-01 19:50:01,350
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,17499894,1349026916228.630196a553f73069b9e568e6912ef0c5.
>> 2012-10-01 19:50:01,351
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56375315,1348940876960.40cf6dfa370ce7f1fc6c1a59ba2f2191.
>> 2012-10-01 19:50:01,351
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,95512574,1349009451986.e4d292eb66d16c21ef8ae32254334850.
>> 2012-10-01 19:50:01,352
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
>> 2012-10-01 19:50:01,352
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
>> 2012-10-01 19:50:01,353
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56432705,1348941150045.07aa626f3703c7b4deaba1263c71894d.
>> 2012-10-01 19:50:01,353
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,13118725,1349026772953.c0be859d4a4dc2246d764a8aad58fe88.
>> 2012-10-01 19:50:01,354
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56520814,1348981132325.c2f16fd16f83aa51769abedfe8968bb6.
>> 2012-10-01 19:50:01,354
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
>> 2012-10-01 19:50:01,355
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56884434,1348963034009.616835869c81659a27eab896f48ae4e1.
>> 2012-10-01 19:50:01,355
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56476541,1349028080664.341392a325646f24a3d8b8cd27ebda19.
>> 2012-10-01 19:50:01,357
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56803462,1348941834500.6313b36f1949381d01df977a182e6140.
>> 2012-10-01 19:50:01,357
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,96464524,1348964300132.7a15f1e8e28f713212c516777267c2bf.
>> 2012-10-01 19:50:01,358
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56875074,1348969971950.3e408e7cb32c9213d184e10bf42837ad.
>> 2012-10-01 19:50:01,359
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,42862354,1348981565262.7ad46818060be413140cdcc11312119d.
>> 2012-10-01 19:50:01,359
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56582264,1349028973106.b481b61be387a041a3f259069d5013a6.
>> 2012-10-01 19:50:01,360
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56579105,1348982931576.1561a22c16263dccb8be07c654b43f2f.
>> 2012-10-01 19:50:01,360
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56723415,1348946404223.38d992d687ad8925810be4220a732b13.
>> 2012-10-01 19:50:01,361
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,4285921,1348981565262.7a2cbd8452b9e406eaf1a5ebff64855a.
>> 2012-10-01 19:50:01,362
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56336394,1348941231573.ca52393a2eabae00a64f65c0b657b95a.
>> 2012-10-01 19:50:01,363
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,96452715,1348942353563.876edfc6e978879aac42bfc905a09c26.
>> 2012-10-01 19:50:01,363
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
>> 2012-10-01 19:50:01,364
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56525625,1348941298909.ccf16ed8e761765d2989343c7670e94f.
>> 2012-10-01 19:50:01,365
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,97578484,1348938848996.98ecacc61ae4c5b3f7a3de64bec0e026.
>> 2012-10-01 19:50:01,365
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56779025,1348961864297.cc13f0a6f5e632508f2e28a174ef1488.
>> 2012-10-01 19:50:01,366
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
>> 2012-10-01 19:50:01,366
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_user_events,43323443,1348591057882.8b0ab02c33f275114d89088345f58885.
>> 2012-10-01 19:50:01,367
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
>> 2012-10-01 19:50:01,367
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,56686234,1348942631318.69270cd5013f8ca984424e508878e428.
>> 2012-10-01 19:50:01,368
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,98642625,1348980566403.2277d2ef1d53d40d41cd23846619a3f8.
>> 2012-10-01 19:50:01,524 [IPC Server handler 57 on 60020] INFO
>> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
>> blk_3201413024070455305_51616611 from any node: java.io.IOException:
>> No live nodes contain current block. Will get new block locations from
>> namenode and retry...
>> 2012-10-01 19:50:02,462 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 2
>> regions to close
>> 2012-10-01 19:50:02,462 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>> 2012-10-01 19:50:02,462 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>> 10.100.101.156:50010
>> 2012-10-01 19:50:02,495 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
>> primary datanode 10.100.102.122:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy4.nextGenerationStamp(Unknown Source)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy14.recoverBlock(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:02,496 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.122:50010 failed 3 times.  Pipeline was
>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>> retry...
>> 2012-10-01 19:50:02,686 [IPC Server handler 46 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@504b62c6,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320404"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.101.172:53925: output error
>> 2012-10-01 19:50:02,688 [IPC Server handler 46 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:02,688 [IPC Server handler 46 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020:
>> exiting
>> 2012-10-01 19:50:02,809 [IPC Server handler 55 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@45f1c31e,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00322424"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.101.178:35016: output error
>> 2012-10-01 19:50:02,810 [IPC Server handler 55 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:02,810 [IPC Server handler 55 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020:
>> exiting
>> 2012-10-01 19:50:03,496 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>> 2012-10-01 19:50:03,496 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>> 10.100.101.156:50010
>> 2012-10-01 19:50:03,510 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
>> primary datanode 10.100.102.122:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy4.nextGenerationStamp(Unknown Source)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy14.recoverBlock(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:03,510 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.122:50010 failed 4 times.  Pipeline was
>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>> retry...
>> 2012-10-01 19:50:05,299 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>> 2012-10-01 19:50:05,299 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>> 10.100.101.156:50010
>> 2012-10-01 19:50:05,314 [IPC Server handler 3 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@472aa9fe,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321694"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.101.176:42371: output error
>> 2012-10-01 19:50:05,315 [IPC Server handler 3 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:05,315 [IPC Server handler 3 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020:
>> exiting
>> 2012-10-01 19:50:05,329 [IPC Server handler 16 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@42987a12,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320293"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.101.135:35132: output error
>> 2012-10-01 19:50:05,331 [IPC Server handler 16 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:05,331 [IPC Server handler 16 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020:
>> exiting
>> 2012-10-01 19:50:05,638 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
>> primary datanode 10.100.102.122:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy4.nextGenerationStamp(Unknown Source)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy14.recoverBlock(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:05,638 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.122:50010 failed 5 times.  Pipeline was
>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
>> retry...
>> 2012-10-01 19:50:05,641 [IPC Server handler 26 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@a9c09e8,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319505"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.101.183:60078: output error
>> 2012-10-01 19:50:05,643 [IPC Server handler 26 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:05,643 [IPC Server handler 26 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020:
>> exiting
>> 2012-10-01 19:50:05,664 [IPC Server handler 57 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@349d7b4,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319915"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.101.141:58290: output error
>> 2012-10-01 19:50:05,666 [IPC Server handler 57 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:05,666 [IPC Server handler 57 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020:
>> exiting
>> 2012-10-01 19:50:07,063 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
>> 2012-10-01 19:50:07,063 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
>> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
>> 10.100.101.156:50010
>> 2012-10-01 19:50:07,076 [IPC Server handler 23 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@5ba03734,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319654"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.161:43227: output error
>> 2012-10-01 19:50:07,077 [IPC Server handler 23 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:07,077 [IPC Server handler 23 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020:
>> exiting
>> 2012-10-01 19:50:07,089 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
>> primary datanode 10.100.102.122:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy4.nextGenerationStamp(Unknown Source)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy14.recoverBlock(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:07,090 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.122:50010 failed 6 times.  Pipeline was
>> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010.
>> Marking primary datanode as bad.
>> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@3d19e607,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319564"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.82:42779: output error
>> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020:
>> exiting
>> 2012-10-01 19:50:07,181
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.
>> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@5920511b,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00322014"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.88:49489: output error
>> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020:
>> exiting
>> 2012-10-01 19:50:08,064 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 1
>> regions to close
>> 2012-10-01 19:50:08,159 [regionserver60020.leaseChecker] INFO
>> org.apache.hadoop.hbase.regionserver.Leases:
>> regionserver60020.leaseChecker closing leases
>> 2012-10-01 19:50:08,159 [regionserver60020.leaseChecker] INFO
>> org.apache.hadoop.hbase.regionserver.Leases:
>> regionserver60020.leaseChecker closed leases
>> 2012-10-01 19:50:08,508 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
>> primary datanode 10.100.101.156:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy4.nextGenerationStamp(Unknown Source)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy14.recoverBlock(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:08,508 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.101.156:50010 failed 1 times.  Pipeline was
>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>> 2012-10-01 19:50:09,652 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
>> primary datanode 10.100.101.156:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy4.nextGenerationStamp(Unknown Source)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy14.recoverBlock(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:09,653 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.101.156:50010 failed 2 times.  Pipeline was
>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>> 2012-10-01 19:50:10,697 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
>> primary datanode 10.100.101.156:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy4.nextGenerationStamp(Unknown Source)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy14.recoverBlock(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:10,697 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.101.156:50010 failed 3 times.  Pipeline was
>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>> 2012-10-01 19:50:12,278 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
>> primary datanode 10.100.101.156:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    [identical stack trace omitted]
>> 2012-10-01 19:50:12,279 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.101.156:50010 failed 4 times.  Pipeline was
>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>> 2012-10-01 19:50:13,294 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
>> primary datanode 10.100.101.156:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    [identical stack trace omitted]
>> 2012-10-01 19:50:13,294 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.101.156:50010 failed 5 times.  Pipeline was
>> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
>> 2012-10-01 19:50:14,306 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
>> primary datanode 10.100.101.156:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    [identical stack trace omitted]
>> 2012-10-01 19:50:14,306 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.101.156:50010 failed 6 times.  Pipeline was
>> 10.100.101.156:50010, 10.100.102.88:50010. Marking primary datanode as
>> bad.
>> 2012-10-01 19:50:15,317 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
>> primary datanode 10.100.102.88:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    [identical stack trace omitted]
>> 2012-10-01 19:50:15,318 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 1 times.  Pipeline was
>> 10.100.102.88:50010. Will retry...
>> 2012-10-01 19:50:16,375 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
>> primary datanode 10.100.102.88:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    [identical stack trace omitted]
>> 2012-10-01 19:50:16,376 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 2 times.  Pipeline was
>> 10.100.102.88:50010. Will retry...
>> 2012-10-01 19:50:17,385 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
>> primary datanode 10.100.102.88:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    [identical stack trace omitted]
>> 2012-10-01 19:50:17,385 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 3 times.  Pipeline was
>> 10.100.102.88:50010. Will retry...
>> 2012-10-01 19:50:18,395 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
>> primary datanode 10.100.102.88:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    [identical stack trace omitted]
>> 2012-10-01 19:50:18,395 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 4 times.  Pipeline was
>> 10.100.102.88:50010. Will retry...
>> 2012-10-01 19:50:19,404 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
>> primary datanode 10.100.102.88:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    [identical stack trace omitted]
>> 2012-10-01 19:50:19,405 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 5 times.  Pipeline was
>> 10.100.102.88:50010. Will retry...
>> 2012-10-01 19:50:20,414 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
>> primary datanode 10.100.102.88:50010
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
>> blk_5535637699691880681_51616301 is already commited, storedBlock ==
>> null.
>>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy4.nextGenerationStamp(Unknown Source)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>>    at java.security.AccessController.doPrivileged(Native Method)
>>    at javax.security.auth.Subject.doAs(Subject.java:396)
>>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>>
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy14.recoverBlock(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:20,415 [DataStreamer for file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> block blk_5535637699691880681_51616301] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>> 2012-10-01 19:50:20,415 [IPC Server handler 58 on 60020] ERROR
>> org.apache.hadoop.hdfs.DFSClient: Exception closing file
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
>> : java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>> java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:20,415 [IPC Server handler 69 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>> java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:20,415 [regionserver60020.logSyncer] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>> java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:20,417 [IPC Server handler 69 on 60020] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>> java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:20,417 [IPC Server handler 58 on 60020] INFO
>> org.apache.hadoop.fs.FileSystem: Could not cancel cleanup thread,
>> though no FileSystems are open
>> 2012-10-01 19:50:20,417 [regionserver60020.logSyncer] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>> java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:20,417 [regionserver60020.logSyncer] FATAL
>> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
>> Requesting close of hlog
>> java.io.IOException: Reflection
>>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>>    at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.lang.reflect.InvocationTargetException
>>    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>    ... 4 more
>> Caused by: java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:20,418 [regionserver60020.logSyncer] ERROR
>> org.apache.hadoop.hbase.regionserver.wal.HLog: Error while syncing,
>> requesting close of hlog
>> java.io.IOException: Reflection
>>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>>    at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.lang.reflect.InvocationTargetException
>>    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>    ... 4 more
>> Caused by: java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:20,417 [IPC Server handler 69 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
>> Requesting close of hlog
>> java.io.IOException: Reflection
>>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog.append(HLog.java:1033)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:1852)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1723)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3076)
>>    at sun.reflect.GeneratedMethodAccessor96.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.lang.reflect.InvocationTargetException
>>    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>    ... 11 more
>> Caused by: java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:20,417 [IPC Server handler 29 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
>> System not available
>> java.io.IOException: File system is not available
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: java.lang.InterruptedException
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    ... 9 more
>> Caused by: java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    ... 21 more
>> 2012-10-01 19:50:20,417 [IPC Server handler 24 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
>> System not available
>> java.io.IOException: File system is not available
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: java.lang.InterruptedException
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    ... 9 more
>> Caused by: java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    ... 21 more
>> 2012-10-01 19:50:20,420 [IPC Server handler 24 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>> abort: loaded coprocessors are: []
>> 2012-10-01 19:50:20,417 [IPC Server handler 1 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
>> System not available
>> java.io.IOException: File system is not available
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: java.lang.InterruptedException
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    ... 9 more
>> Caused by: java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    ... 21 more
>> 2012-10-01 19:50:20,421 [IPC Server handler 1 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>> abort: loaded coprocessors are: []
>> 2012-10-01 19:50:20,417 [IPC Server handler 25 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
>> System not available
>> java.io.IOException: File system is not available
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: java.lang.InterruptedException
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    ... 9 more
>> Caused by: java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    ... 21 more
>> 2012-10-01 19:50:20,421 [IPC Server handler 25 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>> abort: loaded coprocessors are: []
>> 2012-10-01 19:50:20,417 [IPC Server handler 90 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
>> System not available
>> java.io.IOException: File system is not available
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: java.lang.InterruptedException
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    ... 9 more
>> Caused by: java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    ... 21 more
>> 2012-10-01 19:50:20,422 [IPC Server handler 90 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>> abort: loaded coprocessors are: []
>> 2012-10-01 19:50:20,417 [IPC Server handler 58 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
>> System not available
>> java.io.IOException: File system is not available
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: java.lang.InterruptedException
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    ... 9 more
>> Caused by: java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    ... 21 more
>> 2012-10-01 19:50:20,423 [IPC Server handler 58 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>> abort: loaded coprocessors are: []
>> 2012-10-01 19:50:20,417 [IPC Server handler 17 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
>> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
>> System not available
>> java.io.IOException: File system is not available
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: java.lang.InterruptedException
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>    at $Proxy7.getFileInfo(Unknown Source)
>>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>>    ... 9 more
>> Caused by: java.lang.InterruptedException
>>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>>    ... 21 more
>> 2012-10-01 19:50:20,423 [IPC Server handler 17 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>> abort: loaded coprocessors are: []
>> 2012-10-01 19:50:20,423 [IPC Server handler 58 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>> readRequestsCount=6744201, writeRequestsCount=904280,
>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>> blockCacheCount=5435, blockCacheHitCount=321294212,
>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>> hdfsBlocksLocalityIndex=97
>> 2012-10-01 19:50:20,423 [IPC Server handler 17 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>> readRequestsCount=6744201, writeRequestsCount=904280,
>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>> blockCacheCount=5435, blockCacheHitCount=321294212,
>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>> hdfsBlocksLocalityIndex=97
>> 2012-10-01 19:50:20,422 [IPC Server handler 90 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>> readRequestsCount=6744201, writeRequestsCount=904280,
>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>> blockCacheCount=5435, blockCacheHitCount=321294212,
>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>> hdfsBlocksLocalityIndex=97
>> 2012-10-01 19:50:20,421 [IPC Server handler 25 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>> readRequestsCount=6744201, writeRequestsCount=904280,
>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>> blockCacheCount=5435, blockCacheHitCount=321294212,
>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>> hdfsBlocksLocalityIndex=97
>> 2012-10-01 19:50:20,421 [IPC Server handler 1 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>> readRequestsCount=6744201, writeRequestsCount=904280,
>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>> blockCacheCount=5435, blockCacheHitCount=321294212,
>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>> hdfsBlocksLocalityIndex=97
>> 2012-10-01 19:50:20,420 [IPC Server handler 69 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: (responseTooSlow):
>> {"processingtimems":22039,"call":"multi(org.apache.hadoop.hbase.client.MultiAction@207c46fb),
>> rpc version=1, client version=29,
>> methodsFingerPrint=54742778","client":"10.100.102.155:39852","starttimems":1349120998380,"queuetimems":0,"class":"HRegionServer","responsesize":0,"method":"multi"}
>> 2012-10-01 19:50:20,420 [IPC Server handler 24 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>> readRequestsCount=6744201, writeRequestsCount=904280,
>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1575,
>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>> blockCacheCount=5435, blockCacheHitCount=321294212,
>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>> hdfsBlocksLocalityIndex=97
>> 2012-10-01 19:50:20,420
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING
>> region server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272:
>> Unrecoverable exception while closing region
>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
>> still finishing close
>> java.io.IOException: Filesystem closed
>>    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
>>    at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1859)
>>    at java.io.FilterInputStream.close(FilterInputStream.java:155)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:320)
>>    at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1081)
>>    at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:568)
>>    at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:473)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:789)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:724)
>>    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:119)
>>    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>    at java.lang.Thread.run(Thread.java:662)
>> 2012-10-01 19:50:20,426
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>> abort: loaded coprocessors are: []
>> 2012-10-01 19:50:20,419 [IPC Server handler 29 on 60020] FATAL
>> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
>> abort: loaded coprocessors are: []
>> 2012-10-01 19:50:20,426
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of
>> metrics: requestsPerSecond=0, numberOfOnlineRegions=136,
>> numberOfStores=136, numberOfStorefiles=189, storefileIndexSizeMB=15,
>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>> readRequestsCount=6744201, writeRequestsCount=904280,
>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1577,
>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>> blockCacheCount=5435, blockCacheHitCount=321294212,
>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>> hdfsBlocksLocalityIndex=97
>> 2012-10-01 19:50:20,426 [IPC Server handler 29 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
>> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
>> numberOfStorefiles=189, storefileIndexSizeMB=15,
>> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
>> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
>> readRequestsCount=6744201, writeRequestsCount=904280,
>> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1577,
>> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
>> blockCacheCount=5435, blockCacheHitCount=321294212,
>> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
>> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
>> hdfsBlocksLocalityIndex=97
>> 2012-10-01 19:50:20,445 [IPC Server handler 58 on 60020] WARN
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>> fatal error to master
>> java.lang.reflect.UndeclaredThrowableException
>>    at $Proxy8.reportRSFatalError(Unknown Source)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Call to
>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>> local exception: java.nio.channels.ClosedChannelException
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>    ... 11 more
>> Caused by: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>    at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>> 2012-10-01 19:50:20,446 [IPC Server handler 29 on 60020] WARN
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>> fatal error to master
>> java.lang.reflect.UndeclaredThrowableException
>>    at $Proxy8.reportRSFatalError(Unknown Source)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Call to
>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>> local exception: java.nio.channels.ClosedByInterruptException
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>    ... 11 more
>> Caused by: java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>>    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
>>    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
>>    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>>    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>>    at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:545)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)
>>    ... 12 more
>> 2012-10-01 19:50:20,447 [IPC Server handler 29 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>> System not available
>> 2012-10-01 19:50:20,446 [IPC Server handler 58 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>> System not available
>> 2012-10-01 19:50:20,446 [IPC Server handler 17 on 60020] WARN
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>> fatal error to master
>> java.lang.reflect.UndeclaredThrowableException
>>    at $Proxy8.reportRSFatalError(Unknown Source)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:328)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:362)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1045)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:897)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>    ... 11 more
>> 2012-10-01 19:50:20,448 [IPC Server handler 17 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>> System not available
>> 2012-10-01 19:50:20,445 [IPC Server handler 1 on 60020] WARN
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>> fatal error to master
>> java.lang.reflect.UndeclaredThrowableException
>>    at $Proxy8.reportRSFatalError(Unknown Source)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Call to
>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>> local exception: java.nio.channels.ClosedChannelException
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>    ... 11 more
>> Caused by: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>    at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>> 2012-10-01 19:50:20,448 [IPC Server handler 1 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>> System not available
>> 2012-10-01 19:50:20,445
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to
>> report fatal error to master
>> java.lang.reflect.UndeclaredThrowableException
>>    at $Proxy8.reportRSFatalError(Unknown Source)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:131)
>>    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>    at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.io.IOException: Call to
>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>> local exception: java.nio.channels.ClosedByInterruptException
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>    ... 7 more
>> Caused by: java.nio.channels.ClosedByInterruptException
>>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>>    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
>>    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
>>    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>>    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>>    at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:545)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>    at $Proxy8.reportRSFatalError(Unknown Source)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> 2012-10-01 19:50:20,450
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED:
>> Unrecoverable exception while closing region
>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
>> still finishing close
>> 2012-10-01 19:50:20,445 [IPC Server handler 69 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> multi(org.apache.hadoop.hbase.client.MultiAction@207c46fb), rpc
>> version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.155:39852: output error
>> 2012-10-01 19:50:20,445 [IPC Server handler 24 on 60020] WARN
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>> fatal error to master
>> java.lang.reflect.UndeclaredThrowableException
>>    at $Proxy8.reportRSFatalError(Unknown Source)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Call to
>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>> local exception: java.nio.channels.ClosedChannelException
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>    ... 11 more
>> Caused by: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>    at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>> 2012-10-01 19:50:20,451 [IPC Server handler 24 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>> System not available
>> 2012-10-01 19:50:20,445 [IPC Server handler 90 on 60020] WARN
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>> fatal error to master
>> java.lang.reflect.UndeclaredThrowableException
>>    at $Proxy8.reportRSFatalError(Unknown Source)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Call to
>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>> local exception: java.nio.channels.ClosedChannelException
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>    ... 11 more
>> Caused by: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>    at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>> 2012-10-01 19:50:20,451 [IPC Server handler 90 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>> System not available
>> 2012-10-01 19:50:20,445 [IPC Server handler 25 on 60020] WARN
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
>> fatal error to master
>> java.lang.reflect.UndeclaredThrowableException
>>    at $Proxy8.reportRSFatalError(Unknown Source)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
>> Caused by: java.io.IOException: Call to
>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
>> local exception: java.nio.channels.ClosedChannelException
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>>    ... 11 more
>> Caused by: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>>    at java.io.FilterInputStream.read(FilterInputStream.java:116)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>    at java.io.DataInputStream.readInt(DataInputStream.java:370)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>> 2012-10-01 19:50:20,452 [IPC Server handler 25 on 60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
>> System not available
>> 2012-10-01 19:50:20,452 [IPC Server handler 29 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@5d72e577,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321312"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.101.184:34111: output error
>> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@2237178f,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00316983"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.101.188:59581: output error
>> 2012-10-01 19:50:20,450 [IPC Server handler 69 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020:
>> exiting
>> 2012-10-01 19:50:20,450
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
>> ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable
>> while processing event M_RS_CLOSE_REGION
>> java.lang.RuntimeException: java.io.IOException: Filesystem closed
>>    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:133)
>>    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>    at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.io.IOException: Filesystem closed
>>    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
>>    at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1859)
>>    at java.io.FilterInputStream.close(FilterInputStream.java:155)
>>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:320)
>>    at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1081)
>>    at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:568)
>>    at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:473)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:789)
>>    at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:724)
>>    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:119)
>>    ... 4 more
>> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@573dba6d,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"0032027"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.101.183:60076: output error
>> 2012-10-01 19:50:20,452 [IPC Server handler 69 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020:
>> exiting
>> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@4eebbed5,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00317054"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.146:40240: output error
>> 2012-10-01 19:50:20,452 [IPC Server handler 29 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:20,453 [IPC Server handler 29 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020:
>> exiting
>> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020:
>> exiting
>> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020:
>> exiting
>> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@4ff0ed4a,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00318964"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.101.172:53924: output error
>> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020:
>> exiting
>> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@526abe46,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00316914"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.101.184:34110: output error
>> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020:
>> exiting
>> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
>> get([B@5df20fef,
>> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319173"}),
>> rpc version=1, client version=29, methodsFingerPrint=54742778 from
>> 10.100.102.146:40243: output error
>> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] WARN
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020
>> caught: java.nio.channels.ClosedChannelException
>>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
>>
>> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020:
>> exiting
>> 2012-10-01 19:50:21,066
>> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
>> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
>> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
>> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>> java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] WARN
>> org.apache.hadoop.hdfs.DFSClient: Error while syncing
>> java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] FATAL
>> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
>> Requesting close of hlog
>> java.io.IOException: Reflection
>>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>>    at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.lang.reflect.InvocationTargetException
>>    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>    ... 4 more
>> Caused by: java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:21,419 [regionserver60020.logSyncer] ERROR
>> org.apache.hadoop.hbase.regionserver.wal.HLog: Error while syncing,
>> requesting close of hlog
>> java.io.IOException: Reflection
>>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>>    at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>>    at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.lang.reflect.InvocationTargetException
>>    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>    at java.lang.reflect.Method.invoke(Method.java:597)
>>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>>    ... 4 more
>> Caused by: java.io.IOException: Error Recovery for block
>> blk_5535637699691880681_51616301 failed  because recovery from primary
>> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
>> 10.100.102.88:50010. Aborting...
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
>> 2012-10-01 19:50:22,066 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272; all regions
>> closed.
>> 2012-10-01 19:50:22,066 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closing
>> leases
>> 2012-10-01 19:50:22,066 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closed
>> leases
>> 2012-10-01 19:50:22,082 [regionserver60020] WARN
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed deleting my
>> ephemeral node
>> org.apache.zookeeper.KeeperException$SessionExpiredException:
>> KeeperErrorCode = Session expired for
>> /hbase/rs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
>>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>    at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:868)
>>    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:107)
>>    at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:962)
>>    at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:951)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:964)
>>    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:762)
>>    at java.lang.Thread.run(Thread.java:662)
>> 2012-10-01 19:50:22,082 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
>> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272; zookeeper
>> connection closed.
>> 2012-10-01 19:50:22,082 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: regionserver60020
>> exiting
>> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
>> starting; hbase.shutdown.hook=true;
>> fsShutdownHook=Thread[Thread-5,5,main]
>> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown
>> hook
>> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs
>> shutdown hook thread.
>> 2012-10-01 19:50:22,124 [Shutdownhook:regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
>> finished.
>> Mon Oct  1 19:54:10 UTC 2012 Starting regionserver on
>> data3024.ngpipes.milp.ngmoco.com
>> core file size          (blocks, -c) 0
>> data seg size           (kbytes, -d) unlimited
>> scheduling priority             (-e) 20
>> file size               (blocks, -f) unlimited
>> pending signals                 (-i) 16382
>> max locked memory       (kbytes, -l) 64
>> max memory size         (kbytes, -m) unlimited
>> open files                      (-n) 32768
>> pipe size            (512 bytes, -p) 8
>> POSIX message queues     (bytes, -q) 819200
>> real-time priority              (-r) 0
>> stack size              (kbytes, -s) 8192
>> cpu time               (seconds, -t) unlimited
>> max user processes              (-u) unlimited
>> virtual memory          (kbytes, -v) unlimited
>> file locks                      (-x) unlimited
>> 2012-10-01 19:54:11,355 [main] INFO
>> org.apache.hadoop.hbase.util.VersionInfo: HBase 0.92.1
>> 2012-10-01 19:54:11,356 [main] INFO
>> org.apache.hadoop.hbase.util.VersionInfo: Subversion
>> https://svn.apache.org/repos/asf/hbase/branches/0.92 -r 1298924
>> 2012-10-01 19:54:11,356 [main] INFO
>> org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Fri
>> Mar  9 16:58:34 UTC 2012
>> 2012-10-01 19:54:11,513 [main] INFO
>> org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java
>> HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc.,
>> vmVersion=20.1-b02
>> 2012-10-01 19:54:11,513 [main] INFO
>> org.apache.hadoop.hbase.util.ServerCommandLine:
>> vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx4000m,
>> -XX:NewSize=128m, -XX:MaxNewSize=128m,
>> -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC,
>> -XX:CMSInitiatingOccupancyFraction=75, -verbose:gc,
>> -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps,
>> -Xloggc:/data2/hbase_log/gc-hbase.log,
>> -Dcom.sun.management.jmxremote.authenticate=true,
>> -Dcom.sun.management.jmxremote.ssl=false,
>> -Dcom.sun.management.jmxremote.password.file=/home/hadoop/hadoop/conf/jmxremote.password,
>> -Dcom.sun.management.jmxremote.port=8010,
>> -Dhbase.log.dir=/data2/hbase_log,
>> -Dhbase.log.file=hbase-hadoop-regionserver-data3024.ngpipes.milp.ngmoco.com.log,
>> -Dhbase.home.dir=/home/hadoop/hbase, -Dhbase.id.str=hadoop,
>> -Dhbase.root.logger=INFO,DRFA,
>> -Djava.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64]
>> 2012-10-01 19:54:11,964 [IPC Reader 0 on port 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-10-01 19:54:11,967 [IPC Reader 1 on port 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-10-01 19:54:11,970 [IPC Reader 2 on port 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-10-01 19:54:11,973 [IPC Reader 3 on port 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-10-01 19:54:11,976 [IPC Reader 4 on port 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-10-01 19:54:11,979 [IPC Reader 5 on port 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-10-01 19:54:11,982 [IPC Reader 6 on port 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-10-01 19:54:11,985 [IPC Reader 7 on port 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-10-01 19:54:11,988 [IPC Reader 8 on port 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-10-01 19:54:11,991 [IPC Reader 9 on port 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
>> 2012-10-01 19:54:12,002 [main] INFO
>> org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics
>> with hostName=HRegionServer, port=60020
>> 2012-10-01 19:54:12,081 [main] INFO
>> org.apache.hadoop.hbase.io.hfile.CacheConfig: Allocating LruBlockCache
>> with maximum size 996.8m
>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client
>> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48
>> GMT
>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client
>> environment:host.name=data3024.ngpipes.milp.ngmoco.com
>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client
>> environment:java.version=1.6.0_26
>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun
>> Microsystems Inc.
>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client
>> environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
>> 2012-10-01 19:54:12,221 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client
>> environment:java.class.path=/home/hadoop/hbase/conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase:/home/hadoop/hbase/hbase-0.92.1.jar:/home/hadoop/hbase/hbase-0.92.1-tests.jar:/home/hadoop/hbase/lib/activation-1.1.jar:/home/hadoop/hbase/lib/asm-3.1.jar:/home/hadoop/hbase/lib/avro-1.5.3.jar:/home/hadoop/hbase/lib/avro-ipc-1.5.3.jar:/home/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hbase/lib/commons-cli-1.2.jar:/home/hadoop/hbase/lib/commons-codec-1.4.jar:/home/hadoop/hbase/lib/commons-collections-3.2.1.jar:/home/hadoop/hbase/lib/commons-configuration-1.6.jar:/home/hadoop/hbase/lib/commons-digester-1.8.jar:/home/hadoop/hbase/lib/commons-el-1.0.jar:/home/hadoop/hbase/lib/commons-httpclient-3.1.jar:/home/hadoop/hbase/lib/commons-lang-2.5.jar:/home/hadoop/hbase/lib/commons-logging-1.1.1.jar:/home/hadoop/hbase/lib/commons-math-2.1.jar:/home/hadoop/hbase/lib/commons-net-1.4.1.jar:/home/hadoop/hbase/lib/core-3.1.1.jar:/home/hadoop/hbase/lib/guava-r09.jar:/home/hadoop/hbase/lib/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hbase/lib/hadoop-lzo-0.4.9.jar:/home/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/home/hadoop/hbase/lib/httpclient-4.0.1.jar:/home/hadoop/hbase/lib/httpcore-4.0.1.jar:/home/hadoop/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/home/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase/lib/jaxb-api-2.1.jar:/home/hadoop/hbase/lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase/lib/jersey-core-1.4.jar:/home/hadoop/hbase/lib/jersey-json-1.4.jar:/home/hadoop/hbase/lib/jersey-server-1.4.jar:/home/hadoop/hbase/lib/jettison-1.1.jar:/home/hadoop/hbase/lib/jetty-6.1.26.jar:/home/hadoop/hbase/lib/jetty-util-6.1.26.jar:/home/hadoop/hbase/lib/jruby-complete-1.6.5.jar:/home/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase/lib/libthrift-0.7.0.jar:/home/hadoop/hbase/lib/log4j-1.2.16.jar:/home/hadoop/hbase/lib/netty-3.2.4.Final.jar:/home/hadoop/hbase/lib/protobuf-java-2.4.0a.jar:/home/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase/lib/servlet-api-2.5.jar:/home/hadoop/hbase/lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/home/hadoop/hbase/lib/stax-api-1.0.1.jar:/home/hadoop/hbase/lib/velocity-1.7.jar:/home/hadoop/hbase/lib/xmlenc-0.52.jar:/home/hadoop/hbase/lib/zookeeper-3.4.3.jar:
>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client
>> environment:java.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64
>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client
>> environment:os.version=2.6.35-30-generic
>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client environment:user.name=hadoop
>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client
>> environment:user.home=/home/hadoop/
>> 2012-10-01 19:54:12,222 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Client
>> environment:user.dir=/home/gregross
>> 2012-10-01 19:54:12,225 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Initiating client connection,
>> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
>> sessionTimeout=180000 watcher=regionserver:60020
>> 2012-10-01 19:54:12,251 [regionserver60020-SendThread()] INFO
>> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
>> /10.100.102.197:2181
>> 2012-10-01 19:54:12,252 [regionserver60020] INFO
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
>> of this process is 15403@data3024.ngpipes.milp.ngmoco.com
>> 2012-10-01 19:54:12,259
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section
>> 'Client' could not be found. If you are not using SASL, you may ignore
>> this. On the other hand, if you expected SASL to work, please fix your
>> JAAS configuration.
>> 2012-10-01 19:54:12,260
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>> session
>> 2012-10-01 19:54:12,272
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>> server; r-o mode will be unavailable
>> 2012-10-01 19:54:12,273
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
>> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
>> sessionid = 0x137ec64373dd4b5, negotiated timeout = 40000
>> 2012-10-01 19:54:12,289 [main] INFO
>> org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown
>> hook thread: Shutdownhook:regionserver60020
>> 2012-10-01 19:54:12,352 [regionserver60020] INFO
>> org.apache.zookeeper.ZooKeeper: Initiating client connection,
>> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
>> sessionTimeout=180000 watcher=hconnection
>> 2012-10-01 19:54:12,353 [regionserver60020-SendThread()] INFO
>> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
>> /10.100.102.197:2181
>> 2012-10-01 19:54:12,353 [regionserver60020] INFO
>> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
>> of this process is 15403@data3024.ngpipes.milp.ngmoco.com
>> 2012-10-01 19:54:12,354
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
>> SASL-authenticate because the default JAAS configuration section
>> 'Client' could not be found. If you are not using SASL, you may ignore
>> this. On the other hand, if you expected SASL to work, please fix your
>> JAAS configuration.
>> 2012-10-01 19:54:12,354
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
>> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
>> session
>> 2012-10-01 19:54:12,361
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
>> server; r-o mode will be unavailable
>> 2012-10-01 19:54:12,361
>> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
>> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
>> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
>> sessionid = 0x137ec64373dd4b6, negotiated timeout = 40000
>> 2012-10-01 19:54:12,384 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
>> globalMemStoreLimit=1.6g, globalMemStoreLimitLowMark=1.4g,
>> maxHeap=3.9g
>> 2012-10-01 19:54:12,400 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Runs every 2hrs,
>> 46mins, 40sec
>> 2012-10-01 19:54:12,420 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect
>> to Master server at
>> namenode-sn301.ngpipes.milp.ngmoco.com,60000,1348698078915
>> 2012-10-01 19:54:12,453 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to
>> master at data3024.ngpipes.milp.ngmoco.com/10.100.101.156:60020
>> 2012-10-01 19:54:12,453 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at
>> namenode-sn301.ngpipes.milp.ngmoco.com,60000,1348698078915 that we are
>> up with port=60020, startcode=1349121252040
>> 2012-10-01 19:54:12,476 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us
>> hostname to use. Was=data3024.ngpipes.milp.ngmoco.com,
>> Now=data3024.ngpipes.milp.ngmoco.com
>> 2012-10-01 19:54:12,568 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.wal.HLog: HLog configuration:
>> blocksize=64 MB, rollsize=60.8 MB, enabled=true,
>> optionallogflushinternal=1000ms
>> 2012-10-01 19:54:12,642 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.wal.HLog:  for
>> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1349121252040/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1349121252040.1349121252569
>> 2012-10-01 19:54:12,643 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.wal.HLog: Using
>> getNumCurrentReplicas--HDFS-826
>> 2012-10-01 19:54:12,651 [regionserver60020] INFO
>> org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics
>> with processName=RegionServer, sessionId=regionserver60020
>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>> org.apache.hadoop.hbase.metrics: MetricsString added: revision
>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>> org.apache.hadoop.hbase.metrics: MetricsString added: date
>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>> org.apache.hadoop.hbase.metrics: MetricsString added: user
>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>> org.apache.hadoop.hbase.metrics: MetricsString added: url
>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>> org.apache.hadoop.hbase.metrics: MetricsString added: version
>> 2012-10-01 19:54:12,656 [regionserver60020] INFO
>> org.apache.hadoop.hbase.metrics: new MBeanInfo
>> 2012-10-01 19:54:12,657 [regionserver60020] INFO
>> org.apache.hadoop.hbase.metrics: new MBeanInfo
>> 2012-10-01 19:54:12,657 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics:
>> Initialized
>> 2012-10-01 19:54:12,722 [regionserver60020] INFO org.mortbay.log:
>> Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> org.mortbay.log.Slf4jLog
>> 2012-10-01 19:54:12,774 [regionserver60020] INFO
>> org.apache.hadoop.http.HttpServer: Added global filtersafety
>> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>> 2012-10-01 19:54:12,787 [regionserver60020] INFO
>> org.apache.hadoop.http.HttpServer: Port returned by
>> webServer.getConnectors()[0].getLocalPort() before open() is -1.
>> Opening the listener on 60030
>> 2012-10-01 19:54:12,787 [regionserver60020] INFO
>> org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned
>> 60030 webServer.getConnectors()[0].getLocalPort() returned 60030
>> 2012-10-01 19:54:12,787 [regionserver60020] INFO
>> org.apache.hadoop.http.HttpServer: Jetty bound to port 60030
>> 2012-10-01 19:54:12,787 [regionserver60020] INFO org.mortbay.log: jetty-6.1.26
>> 2012-10-01 19:54:13,079 [regionserver60020] INFO org.mortbay.log:
>> Started SelectChannelConnector@0.0.0.0:60030
>> 2012-10-01 19:54:13,079 [IPC Server Responder] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
>> 2012-10-01 19:54:13,079 [IPC Server listener on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60020:
>> starting
>> 2012-10-01 19:54:13,094 [IPC Server handler 0 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020:
>> starting
>> [... snip: identical "starting" messages for IPC Server handlers 1
>> through 99, 19:54:13,094 through 19:54:13,110 ...]
>> 2012-10-01 19:54:13,110 [PRI IPC Server handler 0 on 60020] INFO
>> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020:
>> starting
>> [... snip: identical "starting" messages for PRI IPC Server handlers 1
>> through 9, 19:54:13,110 through 19:54:13,111 ...]
>> 2012-10-01 19:54:13,124 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Serving as
>> data3024.ngpipes.milp.ngmoco.com,60020,1349121252040, RPC listening on
>> data3024.ngpipes.milp.ngmoco.com/10.100.101.156:60020,
>> sessionid=0x137ec64373dd4b5
>> 2012-10-01 19:54:13,124
>> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1349121252040]
>> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
>> SplitLogWorker data3024.ngpipes.milp.ngmoco.com,60020,1349121252040
>> starting
>> 2012-10-01 19:54:13,125 [regionserver60020] INFO
>> org.apache.hadoop.hbase.regionserver.HRegionServer: Registered
>> RegionServer MXBean
>>
>> GC log
>> ======
>>
>> 1.914: [GC 1.914: [ParNew: 99976K->7646K(118016K), 0.0087130 secs]
>> 99976K->7646K(123328K), 0.0088110 secs] [Times: user=0.07 sys=0.00,
>> real=0.00 secs]
>> 416.341: [GC 416.341: [ParNew: 112558K->12169K(118016K), 0.0447760
>> secs] 112558K->25025K(133576K), 0.0450080 secs] [Times: user=0.13
>> sys=0.02, real=0.05 secs]
>> 416.386: [GC [1 CMS-initial-mark: 12855K(15560K)] 25089K(133576K),
>> 0.0037570 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 416.390: [CMS-concurrent-mark-start]
>> 416.407: [CMS-concurrent-mark: 0.015/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 416.407: [CMS-concurrent-preclean-start]
>> 416.408: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 416.408: [GC[YG occupancy: 12233 K (118016 K)]416.408: [Rescan
>> (parallel) , 0.0074970 secs]416.416: [weak refs processing, 0.0000370
>> secs] [1 CMS-remark: 12855K(15560K)] 25089K(133576K), 0.0076480 secs]
>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>> 416.416: [CMS-concurrent-sweep-start]
>> 416.419: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 416.419: [CMS-concurrent-reset-start]
>> 416.467: [CMS-concurrent-reset: 0.049/0.049 secs] [Times: user=0.01
>> sys=0.04, real=0.05 secs]
>> 418.468: [GC [1 CMS-initial-mark: 12855K(21428K)] 26216K(139444K),
>> 0.0037020 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 418.471: [CMS-concurrent-mark-start]
>> 418.487: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 418.487: [CMS-concurrent-preclean-start]
>> 418.488: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 418.488: [GC[YG occupancy: 13360 K (118016 K)]418.488: [Rescan
>> (parallel) , 0.0090770 secs]418.497: [weak refs processing, 0.0000170
>> secs] [1 CMS-remark: 12855K(21428K)] 26216K(139444K), 0.0092220 secs]
>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>> 418.497: [CMS-concurrent-sweep-start]
>> 418.500: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 418.500: [CMS-concurrent-reset-start]
>> 418.511: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 420.512: [GC [1 CMS-initial-mark: 12854K(21428K)] 26344K(139444K),
>> 0.0041050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 420.516: [CMS-concurrent-mark-start]
>> 420.532: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.05
>> sys=0.01, real=0.01 secs]
>> 420.532: [CMS-concurrent-preclean-start]
>> 420.533: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 420.533: [GC[YG occupancy: 13489 K (118016 K)]420.533: [Rescan
>> (parallel) , 0.0014850 secs]420.534: [weak refs processing, 0.0000130
>> secs] [1 CMS-remark: 12854K(21428K)] 26344K(139444K), 0.0015920 secs]
>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 420.534: [CMS-concurrent-sweep-start]
>> 420.537: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 420.537: [CMS-concurrent-reset-start]
>> 420.548: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 422.437: [GC [1 CMS-initial-mark: 12854K(21428K)] 28692K(139444K),
>> 0.0051030 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 422.443: [CMS-concurrent-mark-start]
>> 422.458: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 422.458: [CMS-concurrent-preclean-start]
>> 422.458: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 422.458: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 427.541:
>> [CMS-concurrent-abortable-preclean: 0.678/5.083 secs] [Times:
>> user=0.66 sys=0.00, real=5.08 secs]
>> 427.541: [GC[YG occupancy: 16198 K (118016 K)]427.541: [Rescan
>> (parallel) , 0.0013750 secs]427.543: [weak refs processing, 0.0000140
>> secs] [1 CMS-remark: 12854K(21428K)] 29053K(139444K), 0.0014800 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 427.543: [CMS-concurrent-sweep-start]
>> 427.544: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 427.544: [CMS-concurrent-reset-start]
>> 427.557: [CMS-concurrent-reset: 0.013/0.013 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 429.557: [GC [1 CMS-initial-mark: 12854K(21428K)] 30590K(139444K),
>> 0.0043280 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 429.562: [CMS-concurrent-mark-start]
>> 429.574: [CMS-concurrent-mark: 0.012/0.012 secs] [Times: user=0.05
>> sys=0.00, real=0.02 secs]
>> 429.574: [CMS-concurrent-preclean-start]
>> 429.575: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 429.575: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 434.626:
>> [CMS-concurrent-abortable-preclean: 0.747/5.051 secs] [Times:
>> user=0.74 sys=0.00, real=5.05 secs]
>> 434.626: [GC[YG occupancy: 18154 K (118016 K)]434.626: [Rescan
>> (parallel) , 0.0015440 secs]434.627: [weak refs processing, 0.0000140
>> secs] [1 CMS-remark: 12854K(21428K)] 31009K(139444K), 0.0016500 secs]
>> [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 434.628: [CMS-concurrent-sweep-start]
>> 434.629: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 434.629: [CMS-concurrent-reset-start]
>> 434.641: [CMS-concurrent-reset: 0.012/0.012 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 436.641: [GC [1 CMS-initial-mark: 12854K(21428K)] 31137K(139444K),
>> 0.0043440 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 436.646: [CMS-concurrent-mark-start]
>> 436.660: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
>> sys=0.00, real=0.01 secs]
>> 436.660: [CMS-concurrent-preclean-start]
>> 436.661: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 436.661: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 441.773:
>> [CMS-concurrent-abortable-preclean: 0.608/5.112 secs] [Times:
>> user=0.60 sys=0.00, real=5.11 secs]
>> 441.773: [GC[YG occupancy: 18603 K (118016 K)]441.773: [Rescan
>> (parallel) , 0.0024270 secs]441.776: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12854K(21428K)] 31458K(139444K), 0.0025200 secs]
>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 441.776: [CMS-concurrent-sweep-start]
>> 441.777: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 441.777: [CMS-concurrent-reset-start]
>> 441.788: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 443.788: [GC [1 CMS-initial-mark: 12854K(21428K)] 31586K(139444K),
>> 0.0044590 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 443.793: [CMS-concurrent-mark-start]
>> 443.804: [CMS-concurrent-mark: 0.011/0.011 secs] [Times: user=0.04
>> sys=0.00, real=0.02 secs]
>> 443.804: [CMS-concurrent-preclean-start]
>> 443.805: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 443.805: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 448.821:
>> [CMS-concurrent-abortable-preclean: 0.813/5.016 secs] [Times:
>> user=0.81 sys=0.00, real=5.01 secs]
>> 448.822: [GC[YG occupancy: 19052 K (118016 K)]448.822: [Rescan
>> (parallel) , 0.0013990 secs]448.823: [weak refs processing, 0.0000140
>> secs] [1 CMS-remark: 12854K(21428K)] 31907K(139444K), 0.0015040 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 448.823: [CMS-concurrent-sweep-start]
>> 448.825: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 448.825: [CMS-concurrent-reset-start]
>> 448.837: [CMS-concurrent-reset: 0.012/0.012 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 450.837: [GC [1 CMS-initial-mark: 12854K(21428K)] 32035K(139444K),
>> 0.0044510 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 450.842: [CMS-concurrent-mark-start]
>> 450.857: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 450.857: [CMS-concurrent-preclean-start]
>> 450.858: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 450.858: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 455.922:
>> [CMS-concurrent-abortable-preclean: 0.726/5.064 secs] [Times:
>> user=0.73 sys=0.00, real=5.06 secs]
>> 455.922: [GC[YG occupancy: 19542 K (118016 K)]455.922: [Rescan
>> (parallel) , 0.0016050 secs]455.924: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12854K(21428K)] 32397K(139444K), 0.0017340 secs]
>> [Times: user=0.02 sys=0.00, real=0.01 secs]
>> 455.924: [CMS-concurrent-sweep-start]
>> 455.927: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 455.927: [CMS-concurrent-reset-start]
>> 455.936: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 457.936: [GC [1 CMS-initial-mark: 12854K(21428K)] 32525K(139444K),
>> 0.0026740 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 457.939: [CMS-concurrent-mark-start]
>> 457.950: [CMS-concurrent-mark: 0.011/0.011 secs] [Times: user=0.05
>> sys=0.00, real=0.01 secs]
>> 457.950: [CMS-concurrent-preclean-start]
>> 457.950: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 457.950: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 463.065:
>> [CMS-concurrent-abortable-preclean: 0.708/5.115 secs] [Times:
>> user=0.71 sys=0.00, real=5.12 secs]
>> 463.066: [GC[YG occupancy: 19991 K (118016 K)]463.066: [Rescan
>> (parallel) , 0.0013940 secs]463.067: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12854K(21428K)] 32846K(139444K), 0.0015000 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 463.067: [CMS-concurrent-sweep-start]
>> 463.070: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 463.070: [CMS-concurrent-reset-start]
>> 463.080: [CMS-concurrent-reset: 0.010/0.010 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 465.080: [GC [1 CMS-initial-mark: 12854K(21428K)] 32974K(139444K),
>> 0.0027070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 465.083: [CMS-concurrent-mark-start]
>> 465.096: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
>> sys=0.00, real=0.01 secs]
>> 465.096: [CMS-concurrent-preclean-start]
>> 465.096: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 465.096: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 470.123:
>> [CMS-concurrent-abortable-preclean: 0.723/5.027 secs] [Times:
>> user=0.71 sys=0.00, real=5.03 secs]
>> 470.124: [GC[YG occupancy: 20440 K (118016 K)]470.124: [Rescan
>> (parallel) , 0.0011990 secs]470.125: [weak refs processing, 0.0000130
>> secs] [1 CMS-remark: 12854K(21428K)] 33295K(139444K), 0.0012990 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 470.125: [CMS-concurrent-sweep-start]
>> 470.127: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 470.127: [CMS-concurrent-reset-start]
>> 470.137: [CMS-concurrent-reset: 0.010/0.010 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 472.137: [GC [1 CMS-initial-mark: 12854K(21428K)] 33423K(139444K),
>> 0.0041330 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 472.141: [CMS-concurrent-mark-start]
>> 472.155: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>> sys=0.00, real=0.02 secs]
>> 472.155: [CMS-concurrent-preclean-start]
>> 472.156: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 472.156: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 477.179:
>> [CMS-concurrent-abortable-preclean: 0.618/5.023 secs] [Times:
>> user=0.62 sys=0.00, real=5.02 secs]
>> 477.179: [GC[YG occupancy: 20889 K (118016 K)]477.179: [Rescan
>> (parallel) , 0.0014510 secs]477.180: [weak refs processing, 0.0000090
>> secs] [1 CMS-remark: 12854K(21428K)] 33744K(139444K), 0.0015250 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 477.181: [CMS-concurrent-sweep-start]
>> 477.183: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 477.183: [CMS-concurrent-reset-start]
>> 477.192: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> [... snip: ~25 more near-identical CMS cycles (initial-mark, concurrent-mark,
>> preclean, "abort preclean due to time", remark, sweep, reset) from 479.192
>> through 665.781; each remark pause is under 5 ms, tenured gen stays flat at
>> ~12849K(21428K), and YG occupancy climbs slowly from ~20889K to ~36289K ...]
>> 667.781: [GC [1 CMS-initial-mark: 12849K(21428K)] 49267K(139444K),
>> 0.0057830 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 667.787: [CMS-concurrent-mark-start]
>> 667.802: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>> sys=0.00, real=0.01 secs]
>> 667.802: [CMS-concurrent-preclean-start]
>> 667.802: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 667.802: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 672.809:
>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 672.810: [GC[YG occupancy: 36737 K (118016 K)]672.810: [Rescan
>> (parallel) , 0.0037010 secs]672.813: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 49587K(139444K), 0.0038010 secs]
>> [Times: user=0.03 sys=0.00, real=0.01 secs]
>> 672.814: [CMS-concurrent-sweep-start]
>> 672.815: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 672.815: [CMS-concurrent-reset-start]
>> 672.824: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 674.824: [GC [1 CMS-initial-mark: 12849K(21428K)] 49715K(139444K),
>> 0.0058920 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 674.830: [CMS-concurrent-mark-start]
>> 674.845: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 674.845: [CMS-concurrent-preclean-start]
>> 674.845: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 674.845: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 679.849:
>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 679.850: [GC[YG occupancy: 37185 K (118016 K)]679.850: [Rescan
>> (parallel) , 0.0033420 secs]679.853: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 50035K(139444K), 0.0034440 secs]
>> [Times: user=0.02 sys=0.00, real=0.01 secs]
>> 679.853: [CMS-concurrent-sweep-start]
>> 679.855: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 679.855: [CMS-concurrent-reset-start]
>> 679.863: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 681.864: [GC [1 CMS-initial-mark: 12849K(21428K)] 50163K(139444K),
>> 0.0058780 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 681.870: [CMS-concurrent-mark-start]
>> 681.884: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 681.884: [CMS-concurrent-preclean-start]
>> 681.884: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 681.884: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 686.890:
>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 686.891: [GC[YG occupancy: 37634 K (118016 K)]686.891: [Rescan
>> (parallel) , 0.0044480 secs]686.895: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 50483K(139444K), 0.0045570 secs]
>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>> 686.896: [CMS-concurrent-sweep-start]
>> 686.897: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 686.897: [CMS-concurrent-reset-start]
>> 686.905: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 688.905: [GC [1 CMS-initial-mark: 12849K(21428K)] 50612K(139444K),
>> 0.0058940 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 688.911: [CMS-concurrent-mark-start]
>> 688.925: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 688.925: [CMS-concurrent-preclean-start]
>> 688.925: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 688.926: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 694.041:
>> [CMS-concurrent-abortable-preclean: 0.718/5.115 secs] [Times:
>> user=0.72 sys=0.00, real=5.11 secs]
>> 694.041: [GC[YG occupancy: 38122 K (118016 K)]694.041: [Rescan
>> (parallel) , 0.0028640 secs]694.044: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 50972K(139444K), 0.0029660 secs]
>> [Times: user=0.03 sys=0.00, real=0.01 secs]
>> 694.044: [CMS-concurrent-sweep-start]
>> 694.046: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 694.046: [CMS-concurrent-reset-start]
>> 694.054: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 696.054: [GC [1 CMS-initial-mark: 12849K(21428K)] 51100K(139444K),
>> 0.0060550 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 696.060: [CMS-concurrent-mark-start]
>> 696.074: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 696.074: [CMS-concurrent-preclean-start]
>> 696.075: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 696.075: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 701.078:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 701.079: [GC[YG occupancy: 38571 K (118016 K)]701.079: [Rescan
>> (parallel) , 0.0064210 secs]701.085: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 51421K(139444K), 0.0065220 secs]
>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>> 701.085: [CMS-concurrent-sweep-start]
>> 701.087: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 701.088: [CMS-concurrent-reset-start]
>> 701.097: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 703.097: [GC [1 CMS-initial-mark: 12849K(21428K)] 51549K(139444K),
>> 0.0058470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 703.103: [CMS-concurrent-mark-start]
>> 703.116: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
>> sys=0.00, real=0.02 secs]
>> 703.116: [CMS-concurrent-preclean-start]
>> 703.117: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 703.117: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 708.125:
>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 708.125: [GC[YG occupancy: 39054 K (118016 K)]708.125: [Rescan
>> (parallel) , 0.0037190 secs]708.129: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 51904K(139444K), 0.0038220 secs]
>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>> 708.129: [CMS-concurrent-sweep-start]
>> 708.131: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 708.131: [CMS-concurrent-reset-start]
>> 708.139: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 710.139: [GC [1 CMS-initial-mark: 12849K(21428K)] 52032K(139444K),
>> 0.0059770 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 710.145: [CMS-concurrent-mark-start]
>> 710.158: [CMS-concurrent-mark: 0.012/0.012 secs] [Times: user=0.05
>> sys=0.00, real=0.01 secs]
>> 710.158: [CMS-concurrent-preclean-start]
>> 710.158: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 710.158: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 715.169:
>> [CMS-concurrent-abortable-preclean: 0.705/5.011 secs] [Times:
>> user=0.69 sys=0.01, real=5.01 secs]
>> 715.169: [GC[YG occupancy: 39503 K (118016 K)]715.169: [Rescan
>> (parallel) , 0.0042370 secs]715.173: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 52353K(139444K), 0.0043410 secs]
>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>> 715.174: [CMS-concurrent-sweep-start]
>> 715.176: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 715.176: [CMS-concurrent-reset-start]
>> 715.185: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 717.185: [GC [1 CMS-initial-mark: 12849K(21428K)] 52481K(139444K),
>> 0.0060050 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 717.191: [CMS-concurrent-mark-start]
>> 717.205: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 717.205: [CMS-concurrent-preclean-start]
>> 717.206: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 717.206: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 722.209:
>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>> user=0.71 sys=0.00, real=5.00 secs]
>> 722.210: [GC[YG occupancy: 40161 K (118016 K)]722.210: [Rescan
>> (parallel) , 0.0041630 secs]722.214: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 53011K(139444K), 0.0042630 secs]
>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>> 722.214: [CMS-concurrent-sweep-start]
>> 722.216: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 722.216: [CMS-concurrent-reset-start]
>> 722.226: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 722.521: [GC [1 CMS-initial-mark: 12849K(21428K)] 53099K(139444K),
>> 0.0062380 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 722.528: [CMS-concurrent-mark-start]
>> 722.544: [CMS-concurrent-mark: 0.015/0.016 secs] [Times: user=0.05
>> sys=0.01, real=0.02 secs]
>> 722.544: [CMS-concurrent-preclean-start]
>> 722.544: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 722.544: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 727.558:
>> [CMS-concurrent-abortable-preclean: 0.709/5.014 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 727.558: [GC[YG occupancy: 40610 K (118016 K)]727.558: [Rescan
>> (parallel) , 0.0041700 secs]727.563: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 53460K(139444K), 0.0042780 secs]
>> [Times: user=0.05 sys=0.00, real=0.00 secs]
>> 727.563: [CMS-concurrent-sweep-start]
>> 727.564: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 727.564: [CMS-concurrent-reset-start]
>> 727.573: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.02 secs]
>> 729.574: [GC [1 CMS-initial-mark: 12849K(21428K)] 53588K(139444K),
>> 0.0062700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 729.580: [CMS-concurrent-mark-start]
>> 729.595: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>> sys=0.00, real=0.02 secs]
>> 729.595: [CMS-concurrent-preclean-start]
>> 729.595: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 729.595: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 734.597:
>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 734.597: [GC[YG occupancy: 41058 K (118016 K)]734.597: [Rescan
>> (parallel) , 0.0053870 secs]734.603: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 53908K(139444K), 0.0054870 secs]
>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>> 734.603: [CMS-concurrent-sweep-start]
>> 734.604: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 734.604: [CMS-concurrent-reset-start]
>> 734.614: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 734.877: [GC [1 CMS-initial-mark: 12849K(21428K)] 53908K(139444K),
>> 0.0067230 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 734.884: [CMS-concurrent-mark-start]
>> 734.899: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>> sys=0.00, real=0.01 secs]
>> 734.899: [CMS-concurrent-preclean-start]
>> 734.899: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 734.899: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 739.905:
>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 739.906: [GC[YG occupancy: 41379 K (118016 K)]739.906: [Rescan
>> (parallel) , 0.0050680 secs]739.911: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 54228K(139444K), 0.0051690 secs]
>> [Times: user=0.05 sys=0.00, real=0.00 secs]
>> 739.911: [CMS-concurrent-sweep-start]
>> 739.912: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 739.912: [CMS-concurrent-reset-start]
>> 739.921: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 741.922: [GC [1 CMS-initial-mark: 12849K(21428K)] 54356K(139444K),
>> 0.0062880 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 741.928: [CMS-concurrent-mark-start]
>> 741.942: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 741.942: [CMS-concurrent-preclean-start]
>> 741.943: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 741.943: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 747.059:
>> [CMS-concurrent-abortable-preclean: 0.711/5.117 secs] [Times:
>> user=0.71 sys=0.00, real=5.12 secs]
>> 747.060: [GC[YG occupancy: 41827 K (118016 K)]747.060: [Rescan
>> (parallel) , 0.0051040 secs]747.065: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 54677K(139444K), 0.0052090 secs]
>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>> 747.065: [CMS-concurrent-sweep-start]
>> 747.067: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 747.067: [CMS-concurrent-reset-start]
>> 747.075: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 749.075: [GC [1 CMS-initial-mark: 12849K(21428K)] 54805K(139444K),
>> 0.0063470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 749.082: [CMS-concurrent-mark-start]
>> 749.095: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 749.095: [CMS-concurrent-preclean-start]
>> 749.096: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 749.096: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 754.175:
>> [CMS-concurrent-abortable-preclean: 0.718/5.079 secs] [Times:
>> user=0.72 sys=0.00, real=5.08 secs]
>> 754.175: [GC[YG occupancy: 42423 K (118016 K)]754.175: [Rescan
>> (parallel) , 0.0051290 secs]754.180: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 55273K(139444K), 0.0052290 secs]
>> [Times: user=0.05 sys=0.00, real=0.00 secs]
>> 754.181: [CMS-concurrent-sweep-start]
>> 754.182: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 754.182: [CMS-concurrent-reset-start]
>> 754.191: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 756.191: [GC [1 CMS-initial-mark: 12849K(21428K)] 55401K(139444K),
>> 0.0064020 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 756.198: [CMS-concurrent-mark-start]
>> 756.212: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 756.212: [CMS-concurrent-preclean-start]
>> 756.213: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 756.213: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 761.217:
>> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 761.218: [GC[YG occupancy: 42871 K (118016 K)]761.218: [Rescan
>> (parallel) , 0.0052310 secs]761.223: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 55721K(139444K), 0.0053300 secs]
>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>> 761.223: [CMS-concurrent-sweep-start]
>> 761.225: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 761.225: [CMS-concurrent-reset-start]
>> 761.234: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 763.234: [GC [1 CMS-initial-mark: 12849K(21428K)] 55849K(139444K),
>> 0.0045400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 763.239: [CMS-concurrent-mark-start]
>> 763.253: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>> sys=0.00, real=0.01 secs]
>> 763.253: [CMS-concurrent-preclean-start]
>> 763.253: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 763.253: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 768.348:
>> [CMS-concurrent-abortable-preclean: 0.690/5.095 secs] [Times:
>> user=0.69 sys=0.00, real=5.10 secs]
>> 768.349: [GC[YG occupancy: 43320 K (118016 K)]768.349: [Rescan
>> (parallel) , 0.0045140 secs]768.353: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 56169K(139444K), 0.0046170 secs]
>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>> 768.353: [CMS-concurrent-sweep-start]
>> 768.356: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 768.356: [CMS-concurrent-reset-start]
>> 768.365: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 770.365: [GC [1 CMS-initial-mark: 12849K(21428K)] 56298K(139444K),
>> 0.0063950 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 770.372: [CMS-concurrent-mark-start]
>> 770.388: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 770.388: [CMS-concurrent-preclean-start]
>> 770.388: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 770.388: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 775.400:
>> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 775.401: [GC[YG occupancy: 43768 K (118016 K)]775.401: [Rescan
>> (parallel) , 0.0043990 secs]775.405: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 56618K(139444K), 0.0045000 secs]
>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>> 775.405: [CMS-concurrent-sweep-start]
>> 775.407: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 775.407: [CMS-concurrent-reset-start]
>> 775.417: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 777.417: [GC [1 CMS-initial-mark: 12849K(21428K)] 56746K(139444K),
>> 0.0064580 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 777.423: [CMS-concurrent-mark-start]
>> 777.438: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 777.438: [CMS-concurrent-preclean-start]
>> 777.439: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 777.439: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 782.448:
>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 782.448: [GC[YG occupancy: 44321 K (118016 K)]782.448: [Rescan
>> (parallel) , 0.0054760 secs]782.454: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 57171K(139444K), 0.0055780 secs]
>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>> 782.454: [CMS-concurrent-sweep-start]
>> 782.455: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 782.455: [CMS-concurrent-reset-start]
>> 782.464: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 782.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 57235K(139444K),
>> 0.0066970 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 782.550: [CMS-concurrent-mark-start]
>> 782.567: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 782.567: [CMS-concurrent-preclean-start]
>> 782.568: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 782.568: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 787.574:
>> [CMS-concurrent-abortable-preclean: 0.700/5.006 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 787.574: [GC[YG occupancy: 44746 K (118016 K)]787.574: [Rescan
>> (parallel) , 0.0049170 secs]787.579: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 57596K(139444K), 0.0050210 secs]
>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>> 787.579: [CMS-concurrent-sweep-start]
>> 787.581: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 787.581: [CMS-concurrent-reset-start]
>> 787.590: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 789.591: [GC [1 CMS-initial-mark: 12849K(21428K)] 57724K(139444K),
>> 0.0066850 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 789.598: [CMS-concurrent-mark-start]
>> 789.614: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 789.614: [CMS-concurrent-preclean-start]
>> 789.615: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 789.615: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 794.626:
>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 794.627: [GC[YG occupancy: 45195 K (118016 K)]794.627: [Rescan
>> (parallel) , 0.0056520 secs]794.632: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 58044K(139444K), 0.0057510 secs]
>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>> 794.632: [CMS-concurrent-sweep-start]
>> 794.634: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 794.634: [CMS-concurrent-reset-start]
>> 794.643: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 796.643: [GC [1 CMS-initial-mark: 12849K(21428K)] 58172K(139444K),
>> 0.0067410 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 796.650: [CMS-concurrent-mark-start]
>> 796.666: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 796.666: [CMS-concurrent-preclean-start]
>> 796.667: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 796.667: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 801.670:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 801.670: [GC[YG occupancy: 45643 K (118016 K)]801.670: [Rescan
>> (parallel) , 0.0043550 secs]801.675: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 58493K(139444K), 0.0044580 secs]
>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>> 801.675: [CMS-concurrent-sweep-start]
>> 801.677: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 801.677: [CMS-concurrent-reset-start]
>> 801.686: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 803.686: [GC [1 CMS-initial-mark: 12849K(21428K)] 58621K(139444K),
>> 0.0067250 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 803.693: [CMS-concurrent-mark-start]
>> 803.708: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 803.708: [CMS-concurrent-preclean-start]
>> 803.709: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 803.709: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 808.717:
>> [CMS-concurrent-abortable-preclean: 0.702/5.008 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 808.717: [GC[YG occupancy: 46091 K (118016 K)]808.717: [Rescan
>> (parallel) , 0.0034790 secs]808.720: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 58941K(139444K), 0.0035820 secs]
>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>> 808.721: [CMS-concurrent-sweep-start]
>> 808.722: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 808.722: [CMS-concurrent-reset-start]
>> 808.730: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 810.731: [GC [1 CMS-initial-mark: 12849K(21428K)] 59069K(139444K),
>> 0.0067580 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 810.738: [CMS-concurrent-mark-start]
>> 810.755: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 810.755: [CMS-concurrent-preclean-start]
>> 810.755: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 810.755: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 815.823:
>> [CMS-concurrent-abortable-preclean: 0.715/5.068 secs] [Times:
>> user=0.72 sys=0.00, real=5.06 secs]
>> 815.824: [GC[YG occupancy: 46580 K (118016 K)]815.824: [Rescan
>> (parallel) , 0.0048490 secs]815.829: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 59430K(139444K), 0.0049600 secs]
>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>> 815.829: [CMS-concurrent-sweep-start]
>> 815.831: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 815.831: [CMS-concurrent-reset-start]
>> 815.840: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 817.840: [GC [1 CMS-initial-mark: 12849K(21428K)] 59558K(139444K),
>> 0.0068880 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 817.847: [CMS-concurrent-mark-start]
>> 817.864: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 817.864: [CMS-concurrent-preclean-start]
>> 817.865: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 817.865: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 822.868:
>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>> user=0.69 sys=0.01, real=5.00 secs]
>> 822.868: [GC[YG occupancy: 47028 K (118016 K)]822.868: [Rescan
>> (parallel) , 0.0061120 secs]822.874: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 59878K(139444K), 0.0062150 secs]
>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>> 822.874: [CMS-concurrent-sweep-start]
>> 822.876: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 822.876: [CMS-concurrent-reset-start]
>> 822.885: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 824.885: [GC [1 CMS-initial-mark: 12849K(21428K)] 60006K(139444K),
>> 0.0068610 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 824.892: [CMS-concurrent-mark-start]
>> 824.908: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 824.908: [CMS-concurrent-preclean-start]
>> 824.908: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 824.908: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 829.914:
>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 829.915: [GC[YG occupancy: 47477 K (118016 K)]829.915: [Rescan
>> (parallel) , 0.0034890 secs]829.918: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 60327K(139444K), 0.0035930 secs]
>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>> 829.918: [CMS-concurrent-sweep-start]
>> 829.920: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 829.920: [CMS-concurrent-reset-start]
>> 829.930: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 831.930: [GC [1 CMS-initial-mark: 12849K(21428K)] 60455K(139444K),
>> 0.0069040 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 831.937: [CMS-concurrent-mark-start]
>> 831.953: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 831.953: [CMS-concurrent-preclean-start]
>> 831.954: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 831.954: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 836.957:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.71 sys=0.00, real=5.00 secs]
>> 836.957: [GC[YG occupancy: 47925 K (118016 K)]836.957: [Rescan
>> (parallel) , 0.0060440 secs]836.963: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 60775K(139444K), 0.0061520 secs]
>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>> 836.964: [CMS-concurrent-sweep-start]
>> 836.965: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 836.965: [CMS-concurrent-reset-start]
>> 836.974: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 838.974: [GC [1 CMS-initial-mark: 12849K(21428K)] 60903K(139444K),
>> 0.0069860 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 838.982: [CMS-concurrent-mark-start]
>> 838.997: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 838.998: [CMS-concurrent-preclean-start]
>> 838.998: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 838.998: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 844.091:
>> [CMS-concurrent-abortable-preclean: 0.718/5.093 secs] [Times:
>> user=0.72 sys=0.00, real=5.09 secs]
>> 844.092: [GC[YG occupancy: 48731 K (118016 K)]844.092: [Rescan
>> (parallel) , 0.0052610 secs]844.097: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 61581K(139444K), 0.0053620 secs]
>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>> 844.097: [CMS-concurrent-sweep-start]
>> 844.099: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 844.099: [CMS-concurrent-reset-start]
>> 844.108: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 846.109: [GC [1 CMS-initial-mark: 12849K(21428K)] 61709K(139444K),
>> 0.0071980 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 846.116: [CMS-concurrent-mark-start]
>> 846.133: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 846.133: [CMS-concurrent-preclean-start]
>> 846.134: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 846.134: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 851.137:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 851.137: [GC[YG occupancy: 49180 K (118016 K)]851.137: [Rescan
>> (parallel) , 0.0061320 secs]851.143: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 62030K(139444K), 0.0062320 secs]
>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>> 851.144: [CMS-concurrent-sweep-start]
>> 851.145: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 851.145: [CMS-concurrent-reset-start]
>> 851.154: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 853.154: [GC [1 CMS-initial-mark: 12849K(21428K)] 62158K(139444K),
>> 0.0071610 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 853.162: [CMS-concurrent-mark-start]
>> 853.177: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 853.177: [CMS-concurrent-preclean-start]
>> 853.178: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 853.178: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 858.181:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 858.181: [GC[YG occupancy: 49628 K (118016 K)]858.181: [Rescan
>> (parallel) , 0.0029560 secs]858.184: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 62478K(139444K), 0.0030590 secs]
>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>> 858.184: [CMS-concurrent-sweep-start]
>> 858.186: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 858.186: [CMS-concurrent-reset-start]
>> 858.195: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 860.196: [GC [1 CMS-initial-mark: 12849K(21428K)] 62606K(139444K),
>> 0.0072070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 860.203: [CMS-concurrent-mark-start]
>> 860.219: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 860.219: [CMS-concurrent-preclean-start]
>> 860.219: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 860.219: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 865.226:
>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 865.227: [GC[YG occupancy: 50076 K (118016 K)]865.227: [Rescan
>> (parallel) , 0.0066610 secs]865.233: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 62926K(139444K), 0.0067670 secs]
>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>> 865.233: [CMS-concurrent-sweep-start]
>> 865.235: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 865.235: [CMS-concurrent-reset-start]
>> 865.244: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 867.244: [GC [1 CMS-initial-mark: 12849K(21428K)] 63054K(139444K),
>> 0.0072490 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 867.252: [CMS-concurrent-mark-start]
>> 867.267: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 867.267: [CMS-concurrent-preclean-start]
>> 867.268: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 867.268: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 872.281:
>> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 872.281: [GC[YG occupancy: 50525 K (118016 K)]872.281: [Rescan
>> (parallel) , 0.0053780 secs]872.286: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 63375K(139444K), 0.0054790 secs]
>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>> 872.287: [CMS-concurrent-sweep-start]
>> 872.288: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 872.288: [CMS-concurrent-reset-start]
>> 872.296: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 872.572: [GC [1 CMS-initial-mark: 12849K(21428K)] 63439K(139444K),
>> 0.0073060 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 872.580: [CMS-concurrent-mark-start]
>> 872.597: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 872.597: [CMS-concurrent-preclean-start]
>> 872.597: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 872.597: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 877.600:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 877.601: [GC[YG occupancy: 51049 K (118016 K)]877.601: [Rescan
>> (parallel) , 0.0063070 secs]877.607: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 63899K(139444K), 0.0064090 secs]
>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>> 877.607: [CMS-concurrent-sweep-start]
>> 877.609: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 877.609: [CMS-concurrent-reset-start]
>> 877.619: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 879.619: [GC [1 CMS-initial-mark: 12849K(21428K)] 64027K(139444K),
>> 0.0073320 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 879.626: [CMS-concurrent-mark-start]
>> 879.643: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 879.643: [CMS-concurrent-preclean-start]
>> 879.644: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 879.644: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 884.657:
>> [CMS-concurrent-abortable-preclean: 0.708/5.014 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 884.658: [GC[YG occupancy: 51497 K (118016 K)]884.658: [Rescan
>> (parallel) , 0.0056160 secs]884.663: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 64347K(139444K), 0.0057150 secs]
>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>> 884.663: [CMS-concurrent-sweep-start]
>> 884.665: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 884.665: [CMS-concurrent-reset-start]
>> 884.674: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 886.674: [GC [1 CMS-initial-mark: 12849K(21428K)] 64475K(139444K),
>> 0.0073420 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 886.682: [CMS-concurrent-mark-start]
>> 886.698: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 886.698: [CMS-concurrent-preclean-start]
>> 886.698: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 886.698: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 891.702:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 891.702: [GC[YG occupancy: 51945 K (118016 K)]891.702: [Rescan
>> (parallel) , 0.0070120 secs]891.709: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 64795K(139444K), 0.0071150 secs]
>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>> 891.709: [CMS-concurrent-sweep-start]
>> 891.711: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 891.711: [CMS-concurrent-reset-start]
>> 891.721: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 893.721: [GC [1 CMS-initial-mark: 12849K(21428K)] 64923K(139444K),
>> 0.0073880 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 893.728: [CMS-concurrent-mark-start]
>> 893.745: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 893.745: [CMS-concurrent-preclean-start]
>> 893.745: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 893.745: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 898.852:
>> [CMS-concurrent-abortable-preclean: 0.715/5.107 secs] [Times:
>> user=0.71 sys=0.00, real=5.10 secs]
>> 898.853: [GC[YG occupancy: 53466 K (118016 K)]898.853: [Rescan
>> (parallel) , 0.0060600 secs]898.859: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 66315K(139444K), 0.0061640 secs]
>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>> 898.859: [CMS-concurrent-sweep-start]
>> 898.861: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 898.861: [CMS-concurrent-reset-start]
>> 898.870: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 900.871: [GC [1 CMS-initial-mark: 12849K(21428K)] 66444K(139444K),
>> 0.0074670 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 900.878: [CMS-concurrent-mark-start]
>> 900.895: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 900.895: [CMS-concurrent-preclean-start]
>> 900.896: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 900.896: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 905.969:
>> [CMS-concurrent-abortable-preclean: 0.716/5.074 secs] [Times:
>> user=0.72 sys=0.01, real=5.07 secs]
>> 905.969: [GC[YG occupancy: 54157 K (118016 K)]905.970: [Rescan
>> (parallel) , 0.0068200 secs]905.976: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 67007K(139444K), 0.0069250 secs]
>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>> 905.977: [CMS-concurrent-sweep-start]
>> 905.978: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 905.978: [CMS-concurrent-reset-start]
>> 905.986: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 907.986: [GC [1 CMS-initial-mark: 12849K(21428K)] 67135K(139444K),
>> 0.0076010 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 907.994: [CMS-concurrent-mark-start]
>> 908.009: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 908.009: [CMS-concurrent-preclean-start]
>> 908.010: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 908.010: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 913.013:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.69 sys=0.01, real=5.00 secs]
>> 913.013: [GC[YG occupancy: 54606 K (118016 K)]913.013: [Rescan
>> (parallel) , 0.0053650 secs]913.018: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 67455K(139444K), 0.0054650 secs]
>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>> 913.019: [CMS-concurrent-sweep-start]
>> 913.021: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 913.021: [CMS-concurrent-reset-start]
>> 913.030: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 915.030: [GC [1 CMS-initial-mark: 12849K(21428K)] 67583K(139444K),
>> 0.0076410 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 915.038: [CMS-concurrent-mark-start]
>> 915.055: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 915.055: [CMS-concurrent-preclean-start]
>> 915.056: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 915.056: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 920.058:
>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 920.058: [GC[YG occupancy: 55054 K (118016 K)]920.058: [Rescan
>> (parallel) , 0.0058380 secs]920.064: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 67904K(139444K), 0.0059420 secs]
>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>> 920.064: [CMS-concurrent-sweep-start]
>> 920.066: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 920.066: [CMS-concurrent-reset-start]
>> 920.075: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.01, real=0.01 secs]
>> 922.075: [GC [1 CMS-initial-mark: 12849K(21428K)] 68032K(139444K),
>> 0.0075820 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 922.083: [CMS-concurrent-mark-start]
>> 922.098: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 922.098: [CMS-concurrent-preclean-start]
>> 922.099: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 922.099: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 927.102:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 927.102: [GC[YG occupancy: 55502 K (118016 K)]927.102: [Rescan
>> (parallel) , 0.0059190 secs]927.108: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 68352K(139444K), 0.0060220 secs]
>> [Times: user=0.06 sys=0.01, real=0.01 secs]
>> 927.108: [CMS-concurrent-sweep-start]
>> 927.110: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 927.110: [CMS-concurrent-reset-start]
>> 927.120: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 929.120: [GC [1 CMS-initial-mark: 12849K(21428K)] 68480K(139444K),
>> 0.0077620 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 929.128: [CMS-concurrent-mark-start]
>> 929.145: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 929.145: [CMS-concurrent-preclean-start]
>> 929.145: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 929.145: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 934.237:
>> [CMS-concurrent-abortable-preclean: 0.717/5.092 secs] [Times:
>> user=0.72 sys=0.00, real=5.09 secs]
>> 934.238: [GC[YG occupancy: 55991 K (118016 K)]934.238: [Rescan
>> (parallel) , 0.0042660 secs]934.242: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 68841K(139444K), 0.0043660 secs]
>> [Times: user=0.05 sys=0.00, real=0.00 secs]
>> 934.242: [CMS-concurrent-sweep-start]
>> 934.244: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 934.244: [CMS-concurrent-reset-start]
>> 934.252: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 936.253: [GC [1 CMS-initial-mark: 12849K(21428K)] 68969K(139444K),
>> 0.0077340 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 936.261: [CMS-concurrent-mark-start]
>> 936.277: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 936.277: [CMS-concurrent-preclean-start]
>> 936.278: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 936.278: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 941.284:
>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 941.284: [GC[YG occupancy: 56439 K (118016 K)]941.284: [Rescan
>> (parallel) , 0.0059460 secs]941.290: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 69289K(139444K), 0.0060470 secs]
>> [Times: user=0.08 sys=0.00, real=0.00 secs]
>> 941.290: [CMS-concurrent-sweep-start]
>> 941.293: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 941.293: [CMS-concurrent-reset-start]
>> 941.302: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 943.302: [GC [1 CMS-initial-mark: 12849K(21428K)] 69417K(139444K),
>> 0.0077760 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 943.310: [CMS-concurrent-mark-start]
>> 943.326: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 943.326: [CMS-concurrent-preclean-start]
>> 943.327: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 943.327: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 948.340:
>> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 948.340: [GC[YG occupancy: 56888 K (118016 K)]948.340: [Rescan
>> (parallel) , 0.0047760 secs]948.345: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 69738K(139444K), 0.0048770 secs]
>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>> 948.345: [CMS-concurrent-sweep-start]
>> 948.347: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 948.347: [CMS-concurrent-reset-start]
>> 948.356: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 950.356: [GC [1 CMS-initial-mark: 12849K(21428K)] 69866K(139444K),
>> 0.0077750 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 950.364: [CMS-concurrent-mark-start]
>> 950.380: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 950.380: [CMS-concurrent-preclean-start]
>> 950.380: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 950.380: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 955.384:
>> [CMS-concurrent-abortable-preclean: 0.698/5.004 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 955.384: [GC[YG occupancy: 57336 K (118016 K)]955.384: [Rescan
>> (parallel) , 0.0072540 secs]955.392: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 70186K(139444K), 0.0073540 secs]
>> [Times: user=0.08 sys=0.00, real=0.00 secs]
>> 955.392: [CMS-concurrent-sweep-start]
>> 955.394: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 955.394: [CMS-concurrent-reset-start]
>> 955.403: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 957.403: [GC [1 CMS-initial-mark: 12849K(21428K)] 70314K(139444K),
>> 0.0078120 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 957.411: [CMS-concurrent-mark-start]
>> 957.427: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 957.427: [CMS-concurrent-preclean-start]
>> 957.427: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 957.427: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 962.437:
>> [CMS-concurrent-abortable-preclean: 0.704/5.010 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 962.437: [GC[YG occupancy: 57889 K (118016 K)]962.437: [Rescan
>> (parallel) , 0.0076140 secs]962.445: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 70739K(139444K), 0.0077160 secs]
>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>> 962.445: [CMS-concurrent-sweep-start]
>> 962.446: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 962.446: [CMS-concurrent-reset-start]
>> 962.456: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 962.599: [GC [1 CMS-initial-mark: 12849K(21428K)] 70827K(139444K),
>> 0.0081180 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 962.608: [CMS-concurrent-mark-start]
>> 962.626: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 962.626: [CMS-concurrent-preclean-start]
>> 962.626: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 962.626: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 967.632:
>> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 967.632: [GC[YG occupancy: 58338 K (118016 K)]967.632: [Rescan
>> (parallel) , 0.0061170 secs]967.638: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 71188K(139444K), 0.0062190 secs]
>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>> 967.638: [CMS-concurrent-sweep-start]
>> 967.640: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 967.640: [CMS-concurrent-reset-start]
>> 967.648: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 969.648: [GC [1 CMS-initial-mark: 12849K(21428K)] 71316K(139444K),
>> 0.0081110 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 969.656: [CMS-concurrent-mark-start]
>> 969.674: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 969.674: [CMS-concurrent-preclean-start]
>> 969.674: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 969.674: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 974.677:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 974.677: [GC[YG occupancy: 58786 K (118016 K)]974.677: [Rescan
>> (parallel) , 0.0070810 secs]974.685: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 71636K(139444K), 0.0072050 secs]
>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>> 974.685: [CMS-concurrent-sweep-start]
>> 974.686: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 974.686: [CMS-concurrent-reset-start]
>> 974.695: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 976.696: [GC [1 CMS-initial-mark: 12849K(21428K)] 71764K(139444K),
>> 0.0080650 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 976.704: [CMS-concurrent-mark-start]
>> 976.719: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 976.719: [CMS-concurrent-preclean-start]
>> 976.719: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 976.719: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 981.727:
>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>> user=0.69 sys=0.01, real=5.01 secs]
>> 981.727: [GC[YG occupancy: 59235 K (118016 K)]981.727: [Rescan
>> (parallel) , 0.0066570 secs]981.734: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 72085K(139444K), 0.0067620 secs]
>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>> 981.734: [CMS-concurrent-sweep-start]
>> 981.736: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 981.736: [CMS-concurrent-reset-start]
>> 981.745: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 983.745: [GC [1 CMS-initial-mark: 12849K(21428K)] 72213K(139444K),
>> 0.0081400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 983.753: [CMS-concurrent-mark-start]
>> 983.769: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 983.769: [CMS-concurrent-preclean-start]
>> 983.769: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 983.769: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 988.840:
>> [CMS-concurrent-abortable-preclean: 0.716/5.071 secs] [Times:
>> user=0.71 sys=0.00, real=5.07 secs]
>> 988.840: [GC[YG occupancy: 59683 K (118016 K)]988.840: [Rescan
>> (parallel) , 0.0076020 secs]988.848: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 72533K(139444K), 0.0077100 secs]
>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>> 988.848: [CMS-concurrent-sweep-start]
>> 988.850: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 988.850: [CMS-concurrent-reset-start]
>> 988.858: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 990.858: [GC [1 CMS-initial-mark: 12849K(21428K)] 72661K(139444K),
>> 0.0081810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 990.867: [CMS-concurrent-mark-start]
>> 990.884: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 990.884: [CMS-concurrent-preclean-start]
>> 990.885: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 990.885: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 995.999:
>> [CMS-concurrent-abortable-preclean: 0.721/5.114 secs] [Times:
>> user=0.73 sys=0.00, real=5.11 secs]
>> 995.999: [GC[YG occupancy: 60307 K (118016 K)]995.999: [Rescan
>> (parallel) , 0.0058190 secs]996.005: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 73156K(139444K), 0.0059260 secs]
>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>> 996.005: [CMS-concurrent-sweep-start]
>> 996.007: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 996.007: [CMS-concurrent-reset-start]
>> 996.016: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 998.016: [GC [1 CMS-initial-mark: 12849K(21428K)] 73285K(139444K),
>> 0.0052760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 998.022: [CMS-concurrent-mark-start]
>> 998.038: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 998.038: [CMS-concurrent-preclean-start]
>> 998.039: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 998.039: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1003.048:
>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1003.048: [GC[YG occupancy: 60755 K (118016 K)]1003.048: [Rescan
>> (parallel) , 0.0068040 secs]1003.055: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 73605K(139444K), 0.0069060 secs]
>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>> 1003.055: [CMS-concurrent-sweep-start]
>> 1003.057: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1003.057: [CMS-concurrent-reset-start]
>> 1003.066: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1005.066: [GC [1 CMS-initial-mark: 12849K(21428K)] 73733K(139444K),
>> 0.0082200 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1005.075: [CMS-concurrent-mark-start]
>> 1005.090: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1005.090: [CMS-concurrent-preclean-start]
>> 1005.090: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1005.090: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1010.094:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 1010.094: [GC[YG occupancy: 61203 K (118016 K)]1010.094: [Rescan
>> (parallel) , 0.0066010 secs]1010.101: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 74053K(139444K), 0.0067120 secs]
>> [Times: user=0.08 sys=0.00, real=0.00 secs]
>> 1010.101: [CMS-concurrent-sweep-start]
>> 1010.103: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1010.103: [CMS-concurrent-reset-start]
>> 1010.112: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1012.113: [GC [1 CMS-initial-mark: 12849K(21428K)] 74181K(139444K),
>> 0.0083460 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1012.121: [CMS-concurrent-mark-start]
>> 1012.137: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1012.137: [CMS-concurrent-preclean-start]
>> 1012.138: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1012.138: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1017.144:
>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 1017.144: [GC[YG occupancy: 61651 K (118016 K)]1017.144: [Rescan
>> (parallel) , 0.0058810 secs]1017.150: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 74501K(139444K), 0.0059830 secs]
>> [Times: user=0.06 sys=0.00, real=0.00 secs]
>> 1017.151: [CMS-concurrent-sweep-start]
>> 1017.153: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1017.153: [CMS-concurrent-reset-start]
>> 1017.162: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1019.162: [GC [1 CMS-initial-mark: 12849K(21428K)] 74629K(139444K),
>> 0.0083310 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1019.171: [CMS-concurrent-mark-start]
>> 1019.187: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1019.187: [CMS-concurrent-preclean-start]
>> 1019.187: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1019.187: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1024.261:
>> [CMS-concurrent-abortable-preclean: 0.717/5.074 secs] [Times:
>> user=0.72 sys=0.00, real=5.07 secs]
>> 1024.261: [GC[YG occupancy: 62351 K (118016 K)]1024.262: [Rescan
>> (parallel) , 0.0069720 secs]1024.269: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 75200K(139444K), 0.0070750 secs]
>> [Times: user=0.08 sys=0.01, real=0.01 secs]
>> 1024.269: [CMS-concurrent-sweep-start]
>> 1024.270: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1024.270: [CMS-concurrent-reset-start]
>> 1024.278: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1026.279: [GC [1 CMS-initial-mark: 12849K(21428K)] 75329K(139444K),
>> 0.0086360 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1026.288: [CMS-concurrent-mark-start]
>> 1026.305: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1026.305: [CMS-concurrent-preclean-start]
>> 1026.305: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1026.305: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1031.308:
>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1031.308: [GC[YG occupancy: 62799 K (118016 K)]1031.308: [Rescan
>> (parallel) , 0.0069330 secs]1031.315: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 75649K(139444K), 0.0070380 secs]
>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>> 1031.315: [CMS-concurrent-sweep-start]
>> 1031.316: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1031.316: [CMS-concurrent-reset-start]
>> 1031.326: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1033.326: [GC [1 CMS-initial-mark: 12849K(21428K)] 75777K(139444K),
>> 0.0085850 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1033.335: [CMS-concurrent-mark-start]
>> 1033.350: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1033.350: [CMS-concurrent-preclean-start]
>> 1033.351: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1033.351: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1038.357:
>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>> user=0.69 sys=0.01, real=5.01 secs]
>> 1038.358: [GC[YG occupancy: 63247 K (118016 K)]1038.358: [Rescan
>> (parallel) , 0.0071860 secs]1038.365: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 76097K(139444K), 0.0072900 secs]
>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>> 1038.365: [CMS-concurrent-sweep-start]
>> 1038.367: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1038.367: [CMS-concurrent-reset-start]
>> 1038.376: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1040.376: [GC [1 CMS-initial-mark: 12849K(21428K)] 76225K(139444K),
>> 0.0085910 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1040.385: [CMS-concurrent-mark-start]
>> 1040.401: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1040.401: [CMS-concurrent-preclean-start]
>> 1040.401: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1040.401: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1045.411:
>> [CMS-concurrent-abortable-preclean: 0.705/5.010 secs] [Times:
>> user=0.69 sys=0.01, real=5.01 secs]
>> 1045.412: [GC[YG occupancy: 63695 K (118016 K)]1045.412: [Rescan
>> (parallel) , 0.0082050 secs]1045.420: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 76545K(139444K), 0.0083110 secs]
>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>> 1045.420: [CMS-concurrent-sweep-start]
>> 1045.421: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1045.421: [CMS-concurrent-reset-start]
>> 1045.430: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1047.430: [GC [1 CMS-initial-mark: 12849K(21428K)] 76673K(139444K),
>> 0.0086110 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1047.439: [CMS-concurrent-mark-start]
>> 1047.456: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1047.456: [CMS-concurrent-preclean-start]
>> 1047.456: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1047.456: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1052.462:
>> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 1052.462: [GC[YG occupancy: 64144 K (118016 K)]1052.462: [Rescan
>> (parallel) , 0.0087770 secs]1052.471: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 76994K(139444K), 0.0088770 secs]
>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>> 1052.471: [CMS-concurrent-sweep-start]
>> 1052.472: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1052.472: [CMS-concurrent-reset-start]
>> 1052.481: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1052.628: [GC [1 CMS-initial-mark: 12849K(21428K)] 77058K(139444K),
>> 0.0086170 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1052.637: [CMS-concurrent-mark-start]
>> 1052.655: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1052.655: [CMS-concurrent-preclean-start]
>> 1052.656: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1052.656: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1057.658:
>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1057.658: [GC[YG occupancy: 64569 K (118016 K)]1057.658: [Rescan
>> (parallel) , 0.0072850 secs]1057.665: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 77418K(139444K), 0.0073880 secs]
>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>> 1057.666: [CMS-concurrent-sweep-start]
>> 1057.668: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1057.668: [CMS-concurrent-reset-start]
>> 1057.677: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1059.677: [GC [1 CMS-initial-mark: 12849K(21428K)] 77547K(139444K),
>> 0.0086820 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1059.686: [CMS-concurrent-mark-start]
>> 1059.703: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1059.703: [CMS-concurrent-preclean-start]
>> 1059.703: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1059.703: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1064.712:
>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1064.712: [GC[YG occupancy: 65017 K (118016 K)]1064.712: [Rescan
>> (parallel) , 0.0071630 secs]1064.720: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 77867K(139444K), 0.0072700 secs]
>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>> 1064.720: [CMS-concurrent-sweep-start]
>> 1064.722: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1064.722: [CMS-concurrent-reset-start]
>> 1064.731: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1066.731: [GC [1 CMS-initial-mark: 12849K(21428K)] 77995K(139444K),
>> 0.0087640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1066.740: [CMS-concurrent-mark-start]
>> 1066.757: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1066.757: [CMS-concurrent-preclean-start]
>> 1066.757: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1066.757: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1071.821:
>> [CMS-concurrent-abortable-preclean: 0.714/5.064 secs] [Times:
>> user=0.71 sys=0.00, real=5.06 secs]
>> 1071.822: [GC[YG occupancy: 65465 K (118016 K)]1071.822: [Rescan
>> (parallel) , 0.0056280 secs]1071.827: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 78315K(139444K), 0.0057430 secs]
>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>> 1071.828: [CMS-concurrent-sweep-start]
>> 1071.830: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1071.830: [CMS-concurrent-reset-start]
>> 1071.839: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1073.839: [GC [1 CMS-initial-mark: 12849K(21428K)] 78443K(139444K),
>> 0.0087570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1073.848: [CMS-concurrent-mark-start]
>> 1073.865: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1073.865: [CMS-concurrent-preclean-start]
>> 1073.865: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1073.865: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1078.868:
>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1078.868: [GC[YG occupancy: 65914 K (118016 K)]1078.868: [Rescan
>> (parallel) , 0.0055280 secs]1078.873: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 78763K(139444K), 0.0056320 secs]
>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>> 1078.874: [CMS-concurrent-sweep-start]
>> 1078.875: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1078.875: [CMS-concurrent-reset-start]
>> 1078.884: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1080.884: [GC [1 CMS-initial-mark: 12849K(21428K)] 78892K(139444K),
>> 0.0088520 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1080.893: [CMS-concurrent-mark-start]
>> 1080.908: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1080.909: [CMS-concurrent-preclean-start]
>> 1080.909: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1080.909: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1086.021:
>> [CMS-concurrent-abortable-preclean: 0.714/5.112 secs] [Times:
>> user=0.72 sys=0.00, real=5.11 secs]
>> 1086.021: [GC[YG occupancy: 66531 K (118016 K)]1086.022: [Rescan
>> (parallel) , 0.0075330 secs]1086.029: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 79381K(139444K), 0.0076440 secs]
>> [Times: user=0.09 sys=0.01, real=0.01 secs]
>> 1086.029: [CMS-concurrent-sweep-start]
>> 1086.031: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1086.031: [CMS-concurrent-reset-start]
>> 1086.041: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1088.041: [GC [1 CMS-initial-mark: 12849K(21428K)] 79509K(139444K),
>> 0.0091350 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1088.050: [CMS-concurrent-mark-start]
>> 1088.066: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1088.067: [CMS-concurrent-preclean-start]
>> 1088.067: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1088.067: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1093.070:
>> [CMS-concurrent-abortable-preclean: 0.698/5.004 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1093.071: [GC[YG occupancy: 66980 K (118016 K)]1093.071: [Rescan
>> (parallel) , 0.0051870 secs]1093.076: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 79830K(139444K), 0.0052930 secs]
>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>> 1093.076: [CMS-concurrent-sweep-start]
>> 1093.078: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1093.078: [CMS-concurrent-reset-start]
>> 1093.087: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1095.088: [GC [1 CMS-initial-mark: 12849K(21428K)] 79958K(139444K),
>> 0.0091350 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1095.097: [CMS-concurrent-mark-start]
>> 1095.114: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1095.114: [CMS-concurrent-preclean-start]
>> 1095.115: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1095.115: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1100.121:
>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>> user=0.69 sys=0.01, real=5.00 secs]
>> 1100.121: [GC[YG occupancy: 67428 K (118016 K)]1100.122: [Rescan
>> (parallel) , 0.0068510 secs]1100.128: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 80278K(139444K), 0.0069510 secs]
>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>> 1100.129: [CMS-concurrent-sweep-start]
>> 1100.130: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1100.130: [CMS-concurrent-reset-start]
>> 1100.138: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1102.139: [GC [1 CMS-initial-mark: 12849K(21428K)] 80406K(139444K),
>> 0.0090760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1102.148: [CMS-concurrent-mark-start]
>> 1102.165: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1102.165: [CMS-concurrent-preclean-start]
>> 1102.165: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1102.165: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1107.168:
>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1107.168: [GC[YG occupancy: 67876 K (118016 K)]1107.168: [Rescan
>> (parallel) , 0.0076420 secs]1107.176: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 80726K(139444K), 0.0077500 secs]
>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>> 1107.176: [CMS-concurrent-sweep-start]
>> 1107.178: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1107.178: [CMS-concurrent-reset-start]
>> 1107.187: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1109.188: [GC [1 CMS-initial-mark: 12849K(21428K)] 80854K(139444K),
>> 0.0091510 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1109.197: [CMS-concurrent-mark-start]
>> 1109.214: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1109.214: [CMS-concurrent-preclean-start]
>> 1109.214: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1109.214: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1114.290:
>> [CMS-concurrent-abortable-preclean: 0.711/5.076 secs] [Times:
>> user=0.72 sys=0.00, real=5.07 secs]
>> 1114.290: [GC[YG occupancy: 68473 K (118016 K)]1114.290: [Rescan
>> (parallel) , 0.0084730 secs]1114.299: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 81322K(139444K), 0.0085810 secs]
>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>> 1114.299: [CMS-concurrent-sweep-start]
>> 1114.301: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1114.301: [CMS-concurrent-reset-start]
>> 1114.310: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1115.803: [GC [1 CMS-initial-mark: 12849K(21428K)] 81451K(139444K),
>> 0.0106050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1115.814: [CMS-concurrent-mark-start]
>> 1115.830: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1115.830: [CMS-concurrent-preclean-start]
>> 1115.831: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1115.831: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1120.839:
>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1120.839: [GC[YG occupancy: 68921 K (118016 K)]1120.839: [Rescan
>> (parallel) , 0.0088800 secs]1120.848: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 81771K(139444K), 0.0089910 secs]
>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>> 1120.848: [CMS-concurrent-sweep-start]
>> 1120.850: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1120.850: [CMS-concurrent-reset-start]
>> 1120.858: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1122.859: [GC [1 CMS-initial-mark: 12849K(21428K)] 81899K(139444K),
>> 0.0092280 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1122.868: [CMS-concurrent-mark-start]
>> 1122.885: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1122.885: [CMS-concurrent-preclean-start]
>> 1122.885: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1122.885: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1127.888:
>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>> user=0.69 sys=0.01, real=5.00 secs]
>> 1127.888: [GC[YG occupancy: 69369 K (118016 K)]1127.888: [Rescan
>> (parallel) , 0.0087740 secs]1127.897: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 82219K(139444K), 0.0088850 secs]
>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>> 1127.897: [CMS-concurrent-sweep-start]
>> 1127.898: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1127.898: [CMS-concurrent-reset-start]
>> 1127.906: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1129.907: [GC [1 CMS-initial-mark: 12849K(21428K)] 82347K(139444K),
>> 0.0092280 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1129.916: [CMS-concurrent-mark-start]
>> 1129.933: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1129.933: [CMS-concurrent-preclean-start]
>> 1129.934: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1129.934: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1134.938:
>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1134.938: [GC[YG occupancy: 69818 K (118016 K)]1134.939: [Rescan
>> (parallel) , 0.0078530 secs]1134.946: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 82667K(139444K), 0.0079630 secs]
>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>> 1134.947: [CMS-concurrent-sweep-start]
>> 1134.948: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1134.948: [CMS-concurrent-reset-start]
>> 1134.956: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1136.957: [GC [1 CMS-initial-mark: 12849K(21428K)] 82795K(139444K),
>> 0.0092760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1136.966: [CMS-concurrent-mark-start]
>> 1136.983: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.01 secs]
>> 1136.983: [CMS-concurrent-preclean-start]
>> 1136.984: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.01 secs]
>> 1136.984: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1141.991:
>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1141.991: [GC[YG occupancy: 70266 K (118016 K)]1141.991: [Rescan
>> (parallel) , 0.0090620 secs]1142.000: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 83116K(139444K), 0.0091700 secs]
>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>> 1142.000: [CMS-concurrent-sweep-start]
>> 1142.002: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1142.002: [CMS-concurrent-reset-start]
>> 1142.011: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1142.657: [GC [1 CMS-initial-mark: 12849K(21428K)] 83390K(139444K),
>> 0.0094330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1142.667: [CMS-concurrent-mark-start]
>> 1142.685: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1142.685: [CMS-concurrent-preclean-start]
>> 1142.686: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1142.686: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1147.688:
>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1147.688: [GC[YG occupancy: 70901 K (118016 K)]1147.688: [Rescan
>> (parallel) , 0.0081170 secs]1147.696: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 83751K(139444K), 0.0082390 secs]
>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>> 1147.697: [CMS-concurrent-sweep-start]
>> 1147.698: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1147.698: [CMS-concurrent-reset-start]
>> 1147.706: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1149.706: [GC [1 CMS-initial-mark: 12849K(21428K)] 83879K(139444K),
>> 0.0095560 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1149.716: [CMS-concurrent-mark-start]
>> 1149.734: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1149.734: [CMS-concurrent-preclean-start]
>> 1149.734: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1149.734: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1154.741:
>> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1154.741: [GC[YG occupancy: 71349 K (118016 K)]1154.741: [Rescan
>> (parallel) , 0.0090720 secs]1154.750: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 84199K(139444K), 0.0091780 secs]
>> [Times: user=0.10 sys=0.01, real=0.01 secs]
>> 1154.750: [CMS-concurrent-sweep-start]
>> 1154.752: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1154.752: [CMS-concurrent-reset-start]
>> 1154.762: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1155.021: [GC [1 CMS-initial-mark: 12849K(21428K)] 84199K(139444K),
>> 0.0094030 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1155.031: [CMS-concurrent-mark-start]
>> 1155.047: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1155.047: [CMS-concurrent-preclean-start]
>> 1155.047: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1155.047: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1160.056:
>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1160.056: [GC[YG occupancy: 71669 K (118016 K)]1160.056: [Rescan
>> (parallel) , 0.0056520 secs]1160.062: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 84519K(139444K), 0.0057790 secs]
>> [Times: user=0.07 sys=0.00, real=0.00 secs]
>> 1160.062: [CMS-concurrent-sweep-start]
>> 1160.064: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1160.064: [CMS-concurrent-reset-start]
>> 1160.073: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1162.074: [GC [1 CMS-initial-mark: 12849K(21428K)] 84647K(139444K),
>> 0.0095040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1162.083: [CMS-concurrent-mark-start]
>> 1162.098: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1162.098: [CMS-concurrent-preclean-start]
>> 1162.099: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1162.099: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1167.102:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1167.102: [GC[YG occupancy: 72118 K (118016 K)]1167.102: [Rescan
>> (parallel) , 0.0072180 secs]1167.110: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 84968K(139444K), 0.0073300 secs]
>> [Times: user=0.08 sys=0.00, real=0.01 secs]
>> 1167.110: [CMS-concurrent-sweep-start]
>> 1167.112: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1167.112: [CMS-concurrent-reset-start]
>> 1167.121: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1169.121: [GC [1 CMS-initial-mark: 12849K(21428K)] 85096K(139444K),
>> 0.0096940 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1169.131: [CMS-concurrent-mark-start]
>> 1169.147: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1169.147: [CMS-concurrent-preclean-start]
>> 1169.147: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1169.147: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1174.197:
>> [CMS-concurrent-abortable-preclean: 0.720/5.050 secs] [Times:
>> user=0.72 sys=0.01, real=5.05 secs]
>> 1174.198: [GC[YG occupancy: 72607 K (118016 K)]1174.198: [Rescan
>> (parallel) , 0.0064910 secs]1174.204: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 85456K(139444K), 0.0065940 secs]
>> [Times: user=0.06 sys=0.01, real=0.01 secs]
>> 1174.204: [CMS-concurrent-sweep-start]
>> 1174.206: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1174.206: [CMS-concurrent-reset-start]
>> 1174.215: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1176.215: [GC [1 CMS-initial-mark: 12849K(21428K)] 85585K(139444K),
>> 0.0095940 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1176.225: [CMS-concurrent-mark-start]
>> 1176.240: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1176.240: [CMS-concurrent-preclean-start]
>> 1176.241: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1176.241: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1181.244:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1181.244: [GC[YG occupancy: 73055 K (118016 K)]1181.244: [Rescan
>> (parallel) , 0.0093030 secs]1181.254: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 85905K(139444K), 0.0094040 secs]
>> [Times: user=0.09 sys=0.01, real=0.01 secs]
>> 1181.254: [CMS-concurrent-sweep-start]
>> 1181.256: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1181.256: [CMS-concurrent-reset-start]
>> 1181.265: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1183.266: [GC [1 CMS-initial-mark: 12849K(21428K)] 86033K(139444K),
>> 0.0096490 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1183.275: [CMS-concurrent-mark-start]
>> 1183.293: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
>> sys=0.00, real=0.02 secs]
>> 1183.293: [CMS-concurrent-preclean-start]
>> 1183.294: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1183.294: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1188.301:
>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1188.301: [GC[YG occupancy: 73503 K (118016 K)]1188.301: [Rescan
>> (parallel) , 0.0092610 secs]1188.310: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 86353K(139444K), 0.0093750 secs]
>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>> 1188.310: [CMS-concurrent-sweep-start]
>> 1188.312: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1188.312: [CMS-concurrent-reset-start]
>> 1188.320: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1190.321: [GC [1 CMS-initial-mark: 12849K(21428K)] 86481K(139444K),
>> 0.0097510 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1190.331: [CMS-concurrent-mark-start]
>> 1190.347: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1190.347: [CMS-concurrent-preclean-start]
>> 1190.347: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1190.347: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1195.359:
>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1195.359: [GC[YG occupancy: 73952 K (118016 K)]1195.359: [Rescan
>> (parallel) , 0.0093210 secs]1195.368: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 86801K(139444K), 0.0094330 secs]
>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>> 1195.369: [CMS-concurrent-sweep-start]
>> 1195.370: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1195.370: [CMS-concurrent-reset-start]
>> 1195.378: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1196.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 88001K(139444K),
>> 0.0099870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1196.553: [CMS-concurrent-mark-start]
>> 1196.570: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1196.570: [CMS-concurrent-preclean-start]
>> 1196.570: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1196.570: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1201.574:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1201.574: [GC[YG occupancy: 75472 K (118016 K)]1201.574: [Rescan
>> (parallel) , 0.0096480 secs]1201.584: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 88322K(139444K), 0.0097500 secs]
>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>> 1201.584: [CMS-concurrent-sweep-start]
>> 1201.586: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1201.586: [CMS-concurrent-reset-start]
>> 1201.595: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1202.679: [GC [1 CMS-initial-mark: 12849K(21428K)] 88491K(139444K),
>> 0.0099400 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1202.690: [CMS-concurrent-mark-start]
>> 1202.708: [CMS-concurrent-mark: 0.016/0.019 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1202.708: [CMS-concurrent-preclean-start]
>> 1202.709: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1202.709: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1207.718:
>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1207.718: [GC[YG occupancy: 76109 K (118016 K)]1207.718: [Rescan
>> (parallel) , 0.0096360 secs]1207.727: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 88959K(139444K), 0.0097380 secs]
>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>> 1207.728: [CMS-concurrent-sweep-start]
>> 1207.729: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1207.729: [CMS-concurrent-reset-start]
>> 1207.737: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1209.738: [GC [1 CMS-initial-mark: 12849K(21428K)] 89087K(139444K),
>> 0.0099440 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1209.748: [CMS-concurrent-mark-start]
>> 1209.765: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1209.765: [CMS-concurrent-preclean-start]
>> 1209.765: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1209.765: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1214.797:
>> [CMS-concurrent-abortable-preclean: 0.716/5.031 secs] [Times:
>> user=0.72 sys=0.00, real=5.03 secs]
>> 1214.797: [GC[YG occupancy: 76557 K (118016 K)]1214.797: [Rescan
>> (parallel) , 0.0096280 secs]1214.807: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 89407K(139444K), 0.0097320 secs]
>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>> 1214.807: [CMS-concurrent-sweep-start]
>> 1214.808: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1214.808: [CMS-concurrent-reset-start]
>> 1214.816: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1216.817: [GC [1 CMS-initial-mark: 12849K(21428K)] 89535K(139444K),
>> 0.0099640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1216.827: [CMS-concurrent-mark-start]
>> 1216.844: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1216.844: [CMS-concurrent-preclean-start]
>> 1216.844: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1216.844: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1221.847:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1221.847: [GC[YG occupancy: 77005 K (118016 K)]1221.847: [Rescan
>> (parallel) , 0.0061810 secs]1221.854: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 89855K(139444K), 0.0062950 secs]
>> [Times: user=0.07 sys=0.00, real=0.01 secs]
>> 1221.854: [CMS-concurrent-sweep-start]
>> 1221.855: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1221.855: [CMS-concurrent-reset-start]
>> 1221.864: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1223.865: [GC [1 CMS-initial-mark: 12849K(21428K)] 89983K(139444K),
>> 0.0100430 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1223.875: [CMS-concurrent-mark-start]
>> 1223.890: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1223.890: [CMS-concurrent-preclean-start]
>> 1223.891: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1223.891: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1228.899:
>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1228.899: [GC[YG occupancy: 77454 K (118016 K)]1228.899: [Rescan
>> (parallel) , 0.0095850 secs]1228.909: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 90304K(139444K), 0.0096960 secs]
>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>> 1228.909: [CMS-concurrent-sweep-start]
>> 1228.911: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1228.911: [CMS-concurrent-reset-start]
>> 1228.919: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1230.919: [GC [1 CMS-initial-mark: 12849K(21428K)] 90432K(139444K),
>> 0.0101360 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1230.930: [CMS-concurrent-mark-start]
>> 1230.946: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1230.946: [CMS-concurrent-preclean-start]
>> 1230.947: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1230.947: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1235.952:
>> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1235.952: [GC[YG occupancy: 77943 K (118016 K)]1235.952: [Rescan
>> (parallel) , 0.0084420 secs]1235.961: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 90793K(139444K), 0.0085450 secs]
>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>> 1235.961: [CMS-concurrent-sweep-start]
>> 1235.963: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1235.963: [CMS-concurrent-reset-start]
>> 1235.972: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1237.973: [GC [1 CMS-initial-mark: 12849K(21428K)] 90921K(139444K),
>> 0.0101280 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1237.983: [CMS-concurrent-mark-start]
>> 1237.998: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1237.998: [CMS-concurrent-preclean-start]
>> 1237.999: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1237.999: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1243.008:
>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1243.008: [GC[YG occupancy: 78391 K (118016 K)]1243.008: [Rescan
>> (parallel) , 0.0090510 secs]1243.017: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 91241K(139444K), 0.0091560 secs]
>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>> 1243.017: [CMS-concurrent-sweep-start]
>> 1243.019: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1243.019: [CMS-concurrent-reset-start]
>> 1243.027: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1245.027: [GC [1 CMS-initial-mark: 12849K(21428K)] 91369K(139444K),
>> 0.0101080 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1245.038: [CMS-concurrent-mark-start]
>> 1245.055: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1245.055: [CMS-concurrent-preclean-start]
>> 1245.055: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1245.055: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1250.058:
>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1250.058: [GC[YG occupancy: 78839 K (118016 K)]1250.058: [Rescan
>> (parallel) , 0.0096920 secs]1250.068: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 91689K(139444K), 0.0098040 secs]
>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>> 1250.068: [CMS-concurrent-sweep-start]
>> 1250.070: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1250.070: [CMS-concurrent-reset-start]
>> 1250.078: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1252.078: [GC [1 CMS-initial-mark: 12849K(21428K)] 91817K(139444K),
>> 0.0102560 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1252.089: [CMS-concurrent-mark-start]
>> 1252.105: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1252.105: [CMS-concurrent-preclean-start]
>> 1252.106: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1252.106: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1257.113:
>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1257.113: [GC[YG occupancy: 79288 K (118016 K)]1257.113: [Rescan
>> (parallel) , 0.0089920 secs]1257.122: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 92137K(139444K), 0.0090960 secs]
>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>> 1257.122: [CMS-concurrent-sweep-start]
>> 1257.124: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1257.124: [CMS-concurrent-reset-start]
>> 1257.133: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1259.134: [GC [1 CMS-initial-mark: 12849K(21428K)] 92266K(139444K),
>> 0.0101720 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1259.144: [CMS-concurrent-mark-start]
>> 1259.159: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
>> sys=0.00, real=0.01 secs]
>> 1259.159: [CMS-concurrent-preclean-start]
>> 1259.159: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1259.159: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1264.229:
>> [CMS-concurrent-abortable-preclean: 0.716/5.070 secs] [Times:
>> user=0.72 sys=0.01, real=5.07 secs]
>> 1264.229: [GC[YG occupancy: 79881 K (118016 K)]1264.229: [Rescan
>> (parallel) , 0.0101320 secs]1264.240: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 92731K(139444K), 0.0102440 secs]
>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>> 1264.240: [CMS-concurrent-sweep-start]
>> 1264.241: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1264.241: [CMS-concurrent-reset-start]
>> 1264.250: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1266.250: [GC [1 CMS-initial-mark: 12849K(21428K)] 92859K(139444K),
>> 0.0105180 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1266.261: [CMS-concurrent-mark-start]
>> 1266.277: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1266.277: [CMS-concurrent-preclean-start]
>> 1266.277: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1266.277: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1271.285:
>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1271.285: [GC[YG occupancy: 80330 K (118016 K)]1271.285: [Rescan
>> (parallel) , 0.0094600 secs]1271.295: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 93180K(139444K), 0.0095600 secs]
>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>> 1271.295: [CMS-concurrent-sweep-start]
>> 1271.297: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1271.297: [CMS-concurrent-reset-start]
>> 1271.306: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1273.306: [GC [1 CMS-initial-mark: 12849K(21428K)] 93308K(139444K),
>> 0.0104100 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1273.317: [CMS-concurrent-mark-start]
>> 1273.334: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1273.334: [CMS-concurrent-preclean-start]
>> 1273.335: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1273.335: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1278.341:
>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1278.341: [GC[YG occupancy: 80778 K (118016 K)]1278.341: [Rescan
>> (parallel) , 0.0101320 secs]1278.351: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 93628K(139444K), 0.0102460 secs]
>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>> 1278.351: [CMS-concurrent-sweep-start]
>> 1278.353: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1278.353: [CMS-concurrent-reset-start]
>> 1278.362: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1280.362: [GC [1 CMS-initial-mark: 12849K(21428K)] 93756K(139444K),
>> 0.0105680 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1280.373: [CMS-concurrent-mark-start]
>> 1280.388: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1280.388: [CMS-concurrent-preclean-start]
>> 1280.388: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1280.388: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1285.400:
>> [CMS-concurrent-abortable-preclean: 0.706/5.012 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1285.400: [GC[YG occupancy: 81262 K (118016 K)]1285.400: [Rescan
>> (parallel) , 0.0093660 secs]1285.410: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 94111K(139444K), 0.0094820 secs]
>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>> 1285.410: [CMS-concurrent-sweep-start]
>> 1285.411: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1285.411: [CMS-concurrent-reset-start]
>> 1285.420: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1287.420: [GC [1 CMS-initial-mark: 12849K(21428K)] 94240K(139444K),
>> 0.0105800 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1287.431: [CMS-concurrent-mark-start]
>> 1287.447: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1287.447: [CMS-concurrent-preclean-start]
>> 1287.447: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1287.447: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1292.460:
>> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1292.460: [GC[YG occupancy: 81710 K (118016 K)]1292.460: [Rescan
>> (parallel) , 0.0081130 secs]1292.468: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 94560K(139444K), 0.0082210 secs]
>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>> 1292.468: [CMS-concurrent-sweep-start]
>> 1292.470: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1292.470: [CMS-concurrent-reset-start]
>> 1292.480: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1292.712: [GC [1 CMS-initial-mark: 12849K(21428K)] 94624K(139444K),
>> 0.0104870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1292.723: [CMS-concurrent-mark-start]
>> 1292.739: [CMS-concurrent-mark: 0.015/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1292.739: [CMS-concurrent-preclean-start]
>> 1292.740: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1292.740: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1297.748:
>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1297.748: [GC[YG occupancy: 82135 K (118016 K)]1297.748: [Rescan
>> (parallel) , 0.0106180 secs]1297.759: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 94985K(139444K), 0.0107410 secs]
>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>> 1297.759: [CMS-concurrent-sweep-start]
>> 1297.760: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1297.761: [CMS-concurrent-reset-start]
>> 1297.769: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1299.769: [GC [1 CMS-initial-mark: 12849K(21428K)] 95113K(139444K),
>> 0.0105340 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1299.780: [CMS-concurrent-mark-start]
>> 1299.796: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1299.796: [CMS-concurrent-preclean-start]
>> 1299.797: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1299.797: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1304.805:
>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>> user=0.69 sys=0.00, real=5.01 secs]
>> 1304.805: [GC[YG occupancy: 82583 K (118016 K)]1304.806: [Rescan
>> (parallel) , 0.0094010 secs]1304.815: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 95433K(139444K), 0.0095140 secs]
>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>> 1304.815: [CMS-concurrent-sweep-start]
>> 1304.817: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1304.817: [CMS-concurrent-reset-start]
>> 1304.827: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1306.827: [GC [1 CMS-initial-mark: 12849K(21428K)] 95561K(139444K),
>> 0.0107300 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1306.838: [CMS-concurrent-mark-start]
>> 1306.855: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1306.855: [CMS-concurrent-preclean-start]
>> 1306.855: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1306.855: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1311.858:
>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1311.858: [GC[YG occupancy: 83032 K (118016 K)]1311.858: [Rescan
>> (parallel) , 0.0094210 secs]1311.867: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 95882K(139444K), 0.0095360 secs]
>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>> 1311.868: [CMS-concurrent-sweep-start]
>> 1311.869: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1311.869: [CMS-concurrent-reset-start]
>> 1311.877: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1313.878: [GC [1 CMS-initial-mark: 12849K(21428K)] 96010K(139444K),
>> 0.0107870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1313.889: [CMS-concurrent-mark-start]
>> 1313.905: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1313.905: [CMS-concurrent-preclean-start]
>> 1313.906: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1313.906: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1318.914:
>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1318.915: [GC[YG occupancy: 83481 K (118016 K)]1318.915: [Rescan
>> (parallel) , 0.0096280 secs]1318.924: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 96331K(139444K), 0.0097340 secs]
>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>> 1318.925: [CMS-concurrent-sweep-start]
>> 1318.927: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1318.927: [CMS-concurrent-reset-start]
>> 1318.936: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1320.936: [GC [1 CMS-initial-mark: 12849K(21428K)] 96459K(139444K),
>> 0.0106300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1320.947: [CMS-concurrent-mark-start]
>> 1320.964: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1320.964: [CMS-concurrent-preclean-start]
>> 1320.965: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1320.965: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1325.991:
>> [CMS-concurrent-abortable-preclean: 0.717/5.026 secs] [Times:
>> user=0.73 sys=0.00, real=5.02 secs]
>> 1325.991: [GC[YG occupancy: 84205 K (118016 K)]1325.991: [Rescan
>> (parallel) , 0.0097880 secs]1326.001: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 97055K(139444K), 0.0099010 secs]
>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>> 1326.001: [CMS-concurrent-sweep-start]
>> 1326.003: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1326.003: [CMS-concurrent-reset-start]
>> 1326.012: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1328.013: [GC [1 CMS-initial-mark: 12849K(21428K)] 97183K(139444K),
>> 0.0109730 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1328.024: [CMS-concurrent-mark-start]
>> 1328.039: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1328.039: [CMS-concurrent-preclean-start]
>> 1328.039: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1328.039: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1333.043:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 1333.043: [GC[YG occupancy: 84654 K (118016 K)]1333.043: [Rescan
>> (parallel) , 0.0110740 secs]1333.054: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 97504K(139444K), 0.0111760 secs]
>> [Times: user=0.12 sys=0.01, real=0.02 secs]
>> 1333.054: [CMS-concurrent-sweep-start]
>> 1333.056: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1333.056: [CMS-concurrent-reset-start]
>> 1333.065: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1335.066: [GC [1 CMS-initial-mark: 12849K(21428K)] 97632K(139444K),
>> 0.0109300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1335.077: [CMS-concurrent-mark-start]
>> 1335.094: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1335.094: [CMS-concurrent-preclean-start]
>> 1335.094: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1335.094: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1340.103:
>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1340.103: [GC[YG occupancy: 85203 K (118016 K)]1340.103: [Rescan
>> (parallel) , 0.0109470 secs]1340.114: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 98052K(139444K), 0.0110500 secs]
>> [Times: user=0.11 sys=0.01, real=0.02 secs]
>> 1340.114: [CMS-concurrent-sweep-start]
>> 1340.116: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1340.116: [CMS-concurrent-reset-start]
>> 1340.125: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1342.126: [GC [1 CMS-initial-mark: 12849K(21428K)] 98181K(139444K),
>> 0.0109170 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1342.137: [CMS-concurrent-mark-start]
>> 1342.154: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1342.154: [CMS-concurrent-preclean-start]
>> 1342.154: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1342.154: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1347.161:
>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1347.162: [GC[YG occupancy: 85652 K (118016 K)]1347.162: [Rescan
>> (parallel) , 0.0075610 secs]1347.169: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 98502K(139444K), 0.0076680 secs]
>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>> 1347.169: [CMS-concurrent-sweep-start]
>> 1347.171: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1347.172: [CMS-concurrent-reset-start]
>> 1347.181: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1349.181: [GC [1 CMS-initial-mark: 12849K(21428K)] 98630K(139444K),
>> 0.0109540 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1349.192: [CMS-concurrent-mark-start]
>> 1349.208: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1349.208: [CMS-concurrent-preclean-start]
>> 1349.208: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1349.208: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1354.268:
>> [CMS-concurrent-abortable-preclean: 0.723/5.060 secs] [Times:
>> user=0.73 sys=0.00, real=5.06 secs]
>> 1354.268: [GC[YG occupancy: 86241 K (118016 K)]1354.268: [Rescan
>> (parallel) , 0.0099530 secs]1354.278: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 99091K(139444K), 0.0100670 secs]
>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>> 1354.278: [CMS-concurrent-sweep-start]
>> 1354.280: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1354.280: [CMS-concurrent-reset-start]
>> 1354.288: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1356.289: [GC [1 CMS-initial-mark: 12849K(21428K)] 99219K(139444K),
>> 0.0111450 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1356.300: [CMS-concurrent-mark-start]
>> 1356.316: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1356.316: [CMS-concurrent-preclean-start]
>> 1356.317: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1356.317: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1361.322:
>> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1361.322: [GC[YG occupancy: 86690 K (118016 K)]1361.322: [Rescan
>> (parallel) , 0.0097180 secs]1361.332: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 99540K(139444K), 0.0098210 secs]
>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>> 1361.332: [CMS-concurrent-sweep-start]
>> 1361.333: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1361.333: [CMS-concurrent-reset-start]
>> 1361.342: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1363.342: [GC [1 CMS-initial-mark: 12849K(21428K)] 99668K(139444K),
>> 0.0110230 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1363.354: [CMS-concurrent-mark-start]
>> 1363.368: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1363.368: [CMS-concurrent-preclean-start]
>> 1363.369: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1363.369: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1368.378:
>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1368.378: [GC[YG occupancy: 87139 K (118016 K)]1368.378: [Rescan
>> (parallel) , 0.0100770 secs]1368.388: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 99989K(139444K), 0.0101900 secs]
>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>> 1368.388: [CMS-concurrent-sweep-start]
>> 1368.390: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1368.390: [CMS-concurrent-reset-start]
>> 1368.398: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1370.399: [GC [1 CMS-initial-mark: 12849K(21428K)] 100117K(139444K),
>> 0.0111810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1370.410: [CMS-concurrent-mark-start]
>> 1370.426: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1370.426: [CMS-concurrent-preclean-start]
>> 1370.427: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1370.427: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1375.447:
>> [CMS-concurrent-abortable-preclean: 0.715/5.020 secs] [Times:
>> user=0.72 sys=0.00, real=5.02 secs]
>> 1375.447: [GC[YG occupancy: 87588 K (118016 K)]1375.447: [Rescan
>> (parallel) , 0.0101690 secs]1375.457: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 100438K(139444K), 0.0102730 secs]
>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>> 1375.457: [CMS-concurrent-sweep-start]
>> 1375.459: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1375.459: [CMS-concurrent-reset-start]
>> 1375.467: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1377.467: [GC [1 CMS-initial-mark: 12849K(21428K)] 100566K(139444K),
>> 0.0110760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1377.478: [CMS-concurrent-mark-start]
>> 1377.495: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1377.495: [CMS-concurrent-preclean-start]
>> 1377.496: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1377.496: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1382.502:
>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>> user=0.69 sys=0.01, real=5.00 secs]
>> 1382.502: [GC[YG occupancy: 89213 K (118016 K)]1382.502: [Rescan
>> (parallel) , 0.0108630 secs]1382.513: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 102063K(139444K), 0.0109700 secs]
>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>> 1382.513: [CMS-concurrent-sweep-start]
>> 1382.514: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1382.514: [CMS-concurrent-reset-start]
>> 1382.523: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1382.743: [GC [1 CMS-initial-mark: 12849K(21428K)] 102127K(139444K),
>> 0.0113140 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1382.755: [CMS-concurrent-mark-start]
>> 1382.773: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1382.773: [CMS-concurrent-preclean-start]
>> 1382.774: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1382.774: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1387.777:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1387.777: [GC[YG occupancy: 89638 K (118016 K)]1387.777: [Rescan
>> (parallel) , 0.0113310 secs]1387.789: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 102488K(139444K), 0.0114430 secs]
>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>> 1387.789: [CMS-concurrent-sweep-start]
>> 1387.790: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1387.790: [CMS-concurrent-reset-start]
>> 1387.799: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1389.799: [GC [1 CMS-initial-mark: 12849K(21428K)] 102617K(139444K),
>> 0.0113540 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1389.810: [CMS-concurrent-mark-start]
>> 1389.827: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1389.827: [CMS-concurrent-preclean-start]
>> 1389.827: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1389.827: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1394.831:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1394.831: [GC[YG occupancy: 90088 K (118016 K)]1394.831: [Rescan
>> (parallel) , 0.0103790 secs]1394.841: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 102938K(139444K), 0.0104960 secs]
>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>> 1394.842: [CMS-concurrent-sweep-start]
>> 1394.844: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1394.844: [CMS-concurrent-reset-start]
>> 1394.853: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1396.853: [GC [1 CMS-initial-mark: 12849K(21428K)] 103066K(139444K),
>> 0.0114740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1396.865: [CMS-concurrent-mark-start]
>> 1396.880: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1396.880: [CMS-concurrent-preclean-start]
>> 1396.881: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1396.881: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1401.890:
>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1401.890: [GC[YG occupancy: 90537 K (118016 K)]1401.891: [Rescan
>> (parallel) , 0.0116110 secs]1401.902: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 103387K(139444K), 0.0117240 secs]
>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>> 1401.902: [CMS-concurrent-sweep-start]
>> 1401.904: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1401.904: [CMS-concurrent-reset-start]
>> 1401.914: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1403.914: [GC [1 CMS-initial-mark: 12849K(21428K)] 103515K(139444K),
>> 0.0111980 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1403.925: [CMS-concurrent-mark-start]
>> 1403.943: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.01 secs]
>> 1403.943: [CMS-concurrent-preclean-start]
>> 1403.944: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.01 secs]
>> 1403.944: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1408.982:
>> [CMS-concurrent-abortable-preclean: 0.718/5.038 secs] [Times:
>> user=0.72 sys=0.00, real=5.03 secs]
>> 1408.982: [GC[YG occupancy: 90986 K (118016 K)]1408.982: [Rescan
>> (parallel) , 0.0115260 secs]1408.994: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 103836K(139444K), 0.0116320 secs]
>> [Times: user=0.13 sys=0.00, real=0.02 secs]
>> 1408.994: [CMS-concurrent-sweep-start]
>> 1408.996: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1408.996: [CMS-concurrent-reset-start]
>> 1409.005: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1411.005: [GC [1 CMS-initial-mark: 12849K(21428K)] 103964K(139444K),
>> 0.0114590 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1411.017: [CMS-concurrent-mark-start]
>> 1411.034: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1411.034: [CMS-concurrent-preclean-start]
>> 1411.034: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1411.034: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1416.140:
>> [CMS-concurrent-abortable-preclean: 0.712/5.105 secs] [Times:
>> user=0.71 sys=0.00, real=5.10 secs]
>> 1416.140: [GC[YG occupancy: 91476 K (118016 K)]1416.140: [Rescan
>> (parallel) , 0.0114950 secs]1416.152: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 104326K(139444K), 0.0116020 secs]
>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>> 1416.152: [CMS-concurrent-sweep-start]
>> 1416.154: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1416.154: [CMS-concurrent-reset-start]
>> 1416.163: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1418.163: [GC [1 CMS-initial-mark: 12849K(21428K)] 104454K(139444K),
>> 0.0114040 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1418.175: [CMS-concurrent-mark-start]
>> 1418.191: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1418.191: [CMS-concurrent-preclean-start]
>> 1418.191: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1418.191: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1423.198:
>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1423.199: [GC[YG occupancy: 91925 K (118016 K)]1423.199: [Rescan
>> (parallel) , 0.0105460 secs]1423.209: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 104775K(139444K), 0.0106640 secs]
>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>> 1423.209: [CMS-concurrent-sweep-start]
>> 1423.211: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1423.211: [CMS-concurrent-reset-start]
>> 1423.220: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1425.220: [GC [1 CMS-initial-mark: 12849K(21428K)] 104903K(139444K),
>> 0.0116300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1425.232: [CMS-concurrent-mark-start]
>> 1425.248: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1425.248: [CMS-concurrent-preclean-start]
>> 1425.248: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1425.248: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1430.252:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 1430.252: [GC[YG occupancy: 92374 K (118016 K)]1430.252: [Rescan
>> (parallel) , 0.0098720 secs]1430.262: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 105224K(139444K), 0.0099750 secs]
>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>> 1430.262: [CMS-concurrent-sweep-start]
>> 1430.264: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1430.264: [CMS-concurrent-reset-start]
>> 1430.273: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1432.274: [GC [1 CMS-initial-mark: 12849K(21428K)] 105352K(139444K),
>> 0.0114050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1432.285: [CMS-concurrent-mark-start]
>> 1432.301: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1432.301: [CMS-concurrent-preclean-start]
>> 1432.301: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1432.301: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1437.304:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 1437.305: [GC[YG occupancy: 92823 K (118016 K)]1437.305: [Rescan
>> (parallel) , 0.0115010 secs]1437.316: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 105673K(139444K), 0.0116090 secs]
>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>> 1437.316: [CMS-concurrent-sweep-start]
>> 1437.319: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1437.319: [CMS-concurrent-reset-start]
>> 1437.328: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1439.328: [GC [1 CMS-initial-mark: 12849K(21428K)] 105801K(139444K),
>> 0.0115740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1439.340: [CMS-concurrent-mark-start]
>> 1439.356: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1439.356: [CMS-concurrent-preclean-start]
>> 1439.356: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1439.356: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1444.411:
>> [CMS-concurrent-abortable-preclean: 0.715/5.054 secs] [Times:
>> user=0.72 sys=0.00, real=5.05 secs]
>> 1444.411: [GC[YG occupancy: 93547 K (118016 K)]1444.411: [Rescan
>> (parallel) , 0.0072910 secs]1444.418: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 106397K(139444K), 0.0073970 secs]
>> [Times: user=0.09 sys=0.00, real=0.01 secs]
>> 1444.419: [CMS-concurrent-sweep-start]
>> 1444.420: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1444.420: [CMS-concurrent-reset-start]
>> 1444.429: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1446.429: [GC [1 CMS-initial-mark: 12849K(21428K)] 106525K(139444K),
>> 0.0117950 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1446.441: [CMS-concurrent-mark-start]
>> 1446.457: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1446.457: [CMS-concurrent-preclean-start]
>> 1446.458: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1446.458: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1451.461:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1451.461: [GC[YG occupancy: 93996 K (118016 K)]1451.461: [Rescan
>> (parallel) , 0.0120870 secs]1451.473: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 106846K(139444K), 0.0121920 secs]
>> [Times: user=0.14 sys=0.00, real=0.02 secs]
>> 1451.473: [CMS-concurrent-sweep-start]
>> 1451.476: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1451.476: [CMS-concurrent-reset-start]
>> 1451.485: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1453.485: [GC [1 CMS-initial-mark: 12849K(21428K)] 106974K(139444K),
>> 0.0117990 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1453.497: [CMS-concurrent-mark-start]
>> 1453.514: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1453.514: [CMS-concurrent-preclean-start]
>> 1453.515: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1453.515: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1458.518:
>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1458.518: [GC[YG occupancy: 94445 K (118016 K)]1458.518: [Rescan
>> (parallel) , 0.0123720 secs]1458.530: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 107295K(139444K), 0.0124750 secs]
>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>> 1458.530: [CMS-concurrent-sweep-start]
>> 1458.532: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1458.532: [CMS-concurrent-reset-start]
>> 1458.540: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1460.541: [GC [1 CMS-initial-mark: 12849K(21428K)] 107423K(139444K),
>> 0.0118680 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1460.553: [CMS-concurrent-mark-start]
>> 1460.568: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1460.568: [CMS-concurrent-preclean-start]
>> 1460.569: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1460.569: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1465.577:
>> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1465.577: [GC[YG occupancy: 94894 K (118016 K)]1465.577: [Rescan
>> (parallel) , 0.0119100 secs]1465.589: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 107744K(139444K), 0.0120270 secs]
>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>> 1465.590: [CMS-concurrent-sweep-start]
>> 1465.591: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1465.591: [CMS-concurrent-reset-start]
>> 1465.600: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1467.600: [GC [1 CMS-initial-mark: 12849K(21428K)] 107937K(139444K),
>> 0.0120020 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1467.612: [CMS-concurrent-mark-start]
>> 1467.628: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1467.628: [CMS-concurrent-preclean-start]
>> 1467.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1467.628: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1472.636:
>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1472.637: [GC[YG occupancy: 95408 K (118016 K)]1472.637: [Rescan
>> (parallel) , 0.0119090 secs]1472.649: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 108257K(139444K), 0.0120260 secs]
>> [Times: user=0.13 sys=0.00, real=0.01 secs]
>> 1472.649: [CMS-concurrent-sweep-start]
>> 1472.650: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1472.650: [CMS-concurrent-reset-start]
>> 1472.659: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1472.775: [GC [1 CMS-initial-mark: 12849K(21428K)] 108365K(139444K),
>> 0.0120260 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1472.787: [CMS-concurrent-mark-start]
>> 1472.805: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1472.805: [CMS-concurrent-preclean-start]
>> 1472.806: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.01 sys=0.00, real=0.00 secs]
>> 1472.806: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1477.808:
>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 1477.808: [GC[YG occupancy: 95876 K (118016 K)]1477.808: [Rescan
>> (parallel) , 0.0099490 secs]1477.818: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 108726K(139444K), 0.0100580 secs]
>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>> 1477.818: [CMS-concurrent-sweep-start]
>> 1477.820: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1477.820: [CMS-concurrent-reset-start]
>> 1477.828: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1479.829: [GC [1 CMS-initial-mark: 12849K(21428K)] 108854K(139444K),
>> 0.0119550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1479.841: [CMS-concurrent-mark-start]
>> 1479.857: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1479.857: [CMS-concurrent-preclean-start]
>> 1479.857: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1479.857: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1484.870:
>> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1484.870: [GC[YG occupancy: 96325 K (118016 K)]1484.870: [Rescan
>> (parallel) , 0.0122870 secs]1484.882: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 109175K(139444K), 0.0123900 secs]
>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>> 1484.882: [CMS-concurrent-sweep-start]
>> 1484.884: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1484.884: [CMS-concurrent-reset-start]
>> 1484.893: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1486.893: [GC [1 CMS-initial-mark: 12849K(21428K)] 109304K(139444K),
>> 0.0118470 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>> 1486.905: [CMS-concurrent-mark-start]
>> 1486.921: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.01 secs]
>> 1486.921: [CMS-concurrent-preclean-start]
>> 1486.921: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1486.921: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1491.968:
>> [CMS-concurrent-abortable-preclean: 0.720/5.047 secs] [Times:
>> user=0.72 sys=0.00, real=5.05 secs]
>> 1491.968: [GC[YG occupancy: 96774 K (118016 K)]1491.968: [Rescan
>> (parallel) , 0.0122850 secs]1491.981: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 109624K(139444K), 0.0123880 secs]
>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>> 1491.981: [CMS-concurrent-sweep-start]
>> 1491.982: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1491.982: [CMS-concurrent-reset-start]
>> 1491.991: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1493.991: [GC [1 CMS-initial-mark: 12849K(21428K)] 109753K(139444K),
>> 0.0119790 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1494.004: [CMS-concurrent-mark-start]
>> 1494.019: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1494.019: [CMS-concurrent-preclean-start]
>> 1494.019: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1494.019: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1499.100:
>> [CMS-concurrent-abortable-preclean: 0.722/5.080 secs] [Times:
>> user=0.72 sys=0.00, real=5.08 secs]
>> 1499.100: [GC[YG occupancy: 98295 K (118016 K)]1499.100: [Rescan
>> (parallel) , 0.0123180 secs]1499.112: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 111145K(139444K), 0.0124240 secs]
>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>> 1499.113: [CMS-concurrent-sweep-start]
>> 1499.114: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1499.114: [CMS-concurrent-reset-start]
>> 1499.123: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1501.123: [GC [1 CMS-initial-mark: 12849K(21428K)] 111274K(139444K),
>> 0.0117720 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
>> 1501.135: [CMS-concurrent-mark-start]
>> 1501.150: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
>> sys=0.00, real=0.01 secs]
>> 1501.150: [CMS-concurrent-preclean-start]
>> 1501.151: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.01 sys=0.00, real=0.00 secs]
>> 1501.151: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1506.172:
>> [CMS-concurrent-abortable-preclean: 0.712/5.022 secs] [Times:
>> user=0.71 sys=0.00, real=5.02 secs]
>> 1506.172: [GC[YG occupancy: 98890 K (118016 K)]1506.173: [Rescan
>> (parallel) , 0.0113790 secs]1506.184: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 111740K(139444K), 0.0114830 secs]
>> [Times: user=0.13 sys=0.00, real=0.02 secs]
>> 1506.184: [CMS-concurrent-sweep-start]
>> 1506.186: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1506.186: [CMS-concurrent-reset-start]
>> 1506.195: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1508.196: [GC [1 CMS-initial-mark: 12849K(21428K)] 111868K(139444K),
>> 0.0122930 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1508.208: [CMS-concurrent-mark-start]
>> 1508.225: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1508.225: [CMS-concurrent-preclean-start]
>> 1508.225: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1508.226: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1513.232:
>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1513.232: [GC[YG occupancy: 99339 K (118016 K)]1513.232: [Rescan
>> (parallel) , 0.0123890 secs]1513.244: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 112189K(139444K), 0.0124930 secs]
>> [Times: user=0.14 sys=0.00, real=0.02 secs]
>> 1513.245: [CMS-concurrent-sweep-start]
>> 1513.246: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1513.246: [CMS-concurrent-reset-start]
>> 1513.255: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1515.256: [GC [1 CMS-initial-mark: 12849K(21428K)] 113182K(139444K),
>> 0.0123210 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1515.268: [CMS-concurrent-mark-start]
>> 1515.285: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1515.285: [CMS-concurrent-preclean-start]
>> 1515.285: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1515.285: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1520.290:
>> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1520.290: [GC[YG occupancy: 100653 K (118016 K)]1520.290: [Rescan
>> (parallel) , 0.0125490 secs]1520.303: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 113502K(139444K), 0.0126520 secs]
>> [Times: user=0.14 sys=0.00, real=0.01 secs]
>> 1520.303: [CMS-concurrent-sweep-start]
>> 1520.304: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1520.304: [CMS-concurrent-reset-start]
>> 1520.313: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1522.314: [GC [1 CMS-initial-mark: 12849K(21428K)] 113631K(139444K),
>> 0.0118790 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1522.326: [CMS-concurrent-mark-start]
>> 1522.343: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.01 secs]
>> 1522.343: [CMS-concurrent-preclean-start]
>> 1522.343: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1522.343: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1527.350:
>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1527.350: [GC[YG occupancy: 101102 K (118016 K)]1527.350: [Rescan
>> (parallel) , 0.0127460 secs]1527.363: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 113952K(139444K), 0.0128490 secs]
>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>> 1527.363: [CMS-concurrent-sweep-start]
>> 1527.365: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1527.365: [CMS-concurrent-reset-start]
>> 1527.374: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1529.374: [GC [1 CMS-initial-mark: 12849K(21428K)] 114080K(139444K),
>> 0.0117550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1529.386: [CMS-concurrent-mark-start]
>> 1529.403: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1529.404: [CMS-concurrent-preclean-start]
>> 1529.404: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1529.404: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1534.454:
>> [CMS-concurrent-abortable-preclean: 0.712/5.050 secs] [Times:
>> user=0.70 sys=0.01, real=5.05 secs]
>> 1534.454: [GC[YG occupancy: 101591 K (118016 K)]1534.454: [Rescan
>> (parallel) , 0.0122680 secs]1534.466: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 114441K(139444K), 0.0123750 secs]
>> [Times: user=0.12 sys=0.02, real=0.01 secs]
>> 1534.466: [CMS-concurrent-sweep-start]
>> 1534.468: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1534.468: [CMS-concurrent-reset-start]
>> 1534.478: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1536.478: [GC [1 CMS-initial-mark: 12849K(21428K)] 114570K(139444K),
>> 0.0125250 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1536.491: [CMS-concurrent-mark-start]
>> 1536.507: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1536.507: [CMS-concurrent-preclean-start]
>> 1536.507: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1536.507: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1541.516:
>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1541.516: [GC[YG occupancy: 102041 K (118016 K)]1541.516: [Rescan
>> (parallel) , 0.0088270 secs]1541.525: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 114890K(139444K), 0.0089300 secs]
>> [Times: user=0.10 sys=0.00, real=0.01 secs]
>> 1541.525: [CMS-concurrent-sweep-start]
>> 1541.527: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1541.527: [CMS-concurrent-reset-start]
>> 1541.537: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1543.537: [GC [1 CMS-initial-mark: 12849K(21428K)] 115019K(139444K),
>> 0.0124500 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1543.550: [CMS-concurrent-mark-start]
>> 1543.566: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1543.566: [CMS-concurrent-preclean-start]
>> 1543.567: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1543.567: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1548.578:
>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1548.578: [GC[YG occupancy: 102490 K (118016 K)]1548.578: [Rescan
>> (parallel) , 0.0100430 secs]1548.588: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 115340K(139444K), 0.0101440 secs]
>> [Times: user=0.11 sys=0.00, real=0.01 secs]
>> 1548.588: [CMS-concurrent-sweep-start]
>> 1548.589: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1548.589: [CMS-concurrent-reset-start]
>> 1548.598: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1550.598: [GC [1 CMS-initial-mark: 12849K(21428K)] 115468K(139444K),
>> 0.0125070 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1550.611: [CMS-concurrent-mark-start]
>> 1550.627: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1550.627: [CMS-concurrent-preclean-start]
>> 1550.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1550.628: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1555.631:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 1555.631: [GC[YG occupancy: 103003 K (118016 K)]1555.631: [Rescan
>> (parallel) , 0.0117610 secs]1555.643: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 115853K(139444K), 0.0118770 secs]
>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>> 1555.643: [CMS-concurrent-sweep-start]
>> 1555.645: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1555.645: [CMS-concurrent-reset-start]
>> 1555.655: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1557.655: [GC [1 CMS-initial-mark: 12849K(21428K)] 115981K(139444K),
>> 0.0126720 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1557.668: [CMS-concurrent-mark-start]
>> 1557.685: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1557.685: [CMS-concurrent-preclean-start]
>> 1557.685: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1557.685: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1562.688:
>> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1562.688: [GC[YG occupancy: 103557 K (118016 K)]1562.688: [Rescan
>> (parallel) , 0.0121530 secs]1562.700: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 116407K(139444K), 0.0122560 secs]
>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>> 1562.700: [CMS-concurrent-sweep-start]
>> 1562.701: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1562.701: [CMS-concurrent-reset-start]
>> 1562.710: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1562.821: [GC [1 CMS-initial-mark: 12849K(21428K)] 116514K(139444K),
>> 0.0127240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1562.834: [CMS-concurrent-mark-start]
>> 1562.852: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.01 secs]
>> 1562.852: [CMS-concurrent-preclean-start]
>> 1562.853: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1562.853: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1567.859:
>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1567.859: [GC[YG occupancy: 104026 K (118016 K)]1567.859: [Rescan
>> (parallel) , 0.0131290 secs]1567.872: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 116876K(139444K), 0.0132470 secs]
>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>> 1567.873: [CMS-concurrent-sweep-start]
>> 1567.874: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1567.874: [CMS-concurrent-reset-start]
>> 1567.883: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1569.883: [GC [1 CMS-initial-mark: 12849K(21428K)] 117103K(139444K),
>> 0.0123770 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>> 1569.896: [CMS-concurrent-mark-start]
>> 1569.913: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.01 secs]
>> 1569.913: [CMS-concurrent-preclean-start]
>> 1569.913: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.01 secs]
>> 1569.913: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1574.920:
>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1574.920: [GC[YG occupancy: 104510 K (118016 K)]1574.920: [Rescan
>> (parallel) , 0.0122810 secs]1574.932: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 117360K(139444K), 0.0123870 secs]
>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>> 1574.933: [CMS-concurrent-sweep-start]
>> 1574.935: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1574.935: [CMS-concurrent-reset-start]
>> 1574.944: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1575.163: [GC [1 CMS-initial-mark: 12849K(21428K)] 117360K(139444K),
>> 0.0121590 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
>> 1575.176: [CMS-concurrent-mark-start]
>> 1575.193: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1575.193: [CMS-concurrent-preclean-start]
>> 1575.193: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.01 secs]
>> 1575.193: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1580.197:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.71 sys=0.00, real=5.00 secs]
>> 1580.197: [GC[YG occupancy: 104831 K (118016 K)]1580.197: [Rescan
>> (parallel) , 0.0129860 secs]1580.210: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 117681K(139444K), 0.0130980 secs]
>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>> 1580.210: [CMS-concurrent-sweep-start]
>> 1580.211: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1580.211: [CMS-concurrent-reset-start]
>> 1580.220: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1582.220: [GC [1 CMS-initial-mark: 12849K(21428K)] 117809K(139444K),
>> 0.0129700 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1582.234: [CMS-concurrent-mark-start]
>> 1582.249: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
>> sys=0.01, real=0.02 secs]
>> 1582.249: [CMS-concurrent-preclean-start]
>> 1582.249: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1582.249: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1587.262:
>> [CMS-concurrent-abortable-preclean: 0.707/5.013 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1587.262: [GC[YG occupancy: 105280 K (118016 K)]1587.262: [Rescan
>> (parallel) , 0.0134570 secs]1587.276: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 118130K(139444K), 0.0135720 secs]
>> [Times: user=0.15 sys=0.00, real=0.02 secs]
>> 1587.276: [CMS-concurrent-sweep-start]
>> 1587.278: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1587.278: [CMS-concurrent-reset-start]
>> 1587.287: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1589.287: [GC [1 CMS-initial-mark: 12849K(21428K)] 118258K(139444K),
>> 0.0130010 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1589.301: [CMS-concurrent-mark-start]
>> 1589.316: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1589.316: [CMS-concurrent-preclean-start]
>> 1589.316: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1589.316: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1594.364:
>> [CMS-concurrent-abortable-preclean: 0.712/5.048 secs] [Times:
>> user=0.71 sys=0.00, real=5.05 secs]
>> 1594.365: [GC[YG occupancy: 105770 K (118016 K)]1594.365: [Rescan
>> (parallel) , 0.0131190 secs]1594.378: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 118620K(139444K), 0.0132380 secs]
>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>> 1594.378: [CMS-concurrent-sweep-start]
>> 1594.380: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1594.380: [CMS-concurrent-reset-start]
>> 1594.389: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1596.390: [GC [1 CMS-initial-mark: 12849K(21428K)] 118748K(139444K),
>> 0.0130650 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1596.403: [CMS-concurrent-mark-start]
>> 1596.418: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1596.418: [CMS-concurrent-preclean-start]
>> 1596.419: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1596.419: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1601.422:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.69 sys=0.01, real=5.00 secs]
>> 1601.422: [GC[YG occupancy: 106219 K (118016 K)]1601.422: [Rescan
>> (parallel) , 0.0130310 secs]1601.435: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 119069K(139444K), 0.0131490 secs]
>> [Times: user=0.16 sys=0.00, real=0.02 secs]
>> 1601.435: [CMS-concurrent-sweep-start]
>> 1601.437: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1601.437: [CMS-concurrent-reset-start]
>> 1601.446: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1603.447: [GC [1 CMS-initial-mark: 12849K(21428K)] 119197K(139444K),
>> 0.0130220 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1603.460: [CMS-concurrent-mark-start]
>> 1603.476: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1603.476: [CMS-concurrent-preclean-start]
>> 1603.476: [CMS-concurrent-preclean: 0.000/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1603.476: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1608.478:
>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1608.478: [GC[YG occupancy: 106668 K (118016 K)]1608.479: [Rescan
>> (parallel) , 0.0122680 secs]1608.491: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 119518K(139444K), 0.0123790 secs]
>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>> 1608.491: [CMS-concurrent-sweep-start]
>> 1608.492: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1608.492: [CMS-concurrent-reset-start]
>> 1608.501: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1610.502: [GC [1 CMS-initial-mark: 12849K(21428K)] 119646K(139444K),
>> 0.0130770 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>> 1610.515: [CMS-concurrent-mark-start]
>> 1610.530: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1610.530: [CMS-concurrent-preclean-start]
>> 1610.530: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1610.530: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1615.536:
>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1615.536: [GC[YG occupancy: 107117 K (118016 K)]1615.536: [Rescan
>> (parallel) , 0.0125470 secs]1615.549: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 119967K(139444K), 0.0126510 secs]
>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>> 1615.549: [CMS-concurrent-sweep-start]
>> 1615.551: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1615.551: [CMS-concurrent-reset-start]
>> 1615.561: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1617.561: [GC [1 CMS-initial-mark: 12849K(21428K)] 120095K(139444K),
>> 0.0129520 secs] [Times: user=0.00 sys=0.00, real=0.02 secs]
>> 1617.574: [CMS-concurrent-mark-start]
>> 1617.591: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.01 secs]
>> 1617.591: [CMS-concurrent-preclean-start]
>> 1617.591: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1617.591: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1622.598:
>> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1622.598: [GC[YG occupancy: 107777 K (118016 K)]1622.599: [Rescan
>> (parallel) , 0.0140340 secs]1622.613: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 120627K(139444K), 0.0141520 secs]
>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>> 1622.613: [CMS-concurrent-sweep-start]
>> 1622.614: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1622.614: [CMS-concurrent-reset-start]
>> 1622.623: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.02 secs]
>> 1622.848: [GC [1 CMS-initial-mark: 12849K(21428K)] 120691K(139444K),
>> 0.0133410 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1622.861: [CMS-concurrent-mark-start]
>> 1622.878: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1622.878: [CMS-concurrent-preclean-start]
>> 1622.879: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1622.879: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1627.941:
>> [CMS-concurrent-abortable-preclean: 0.656/5.062 secs] [Times:
>> user=0.65 sys=0.00, real=5.06 secs]
>> 1627.941: [GC[YG occupancy: 108202 K (118016 K)]1627.941: [Rescan
>> (parallel) , 0.0135120 secs]1627.955: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 121052K(139444K), 0.0136620 secs]
>> [Times: user=0.15 sys=0.00, real=0.02 secs]
>> 1627.955: [CMS-concurrent-sweep-start]
>> 1627.956: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1627.956: [CMS-concurrent-reset-start]
>> 1627.965: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1629.966: [GC [1 CMS-initial-mark: 12849K(21428K)] 121180K(139444K),
>> 0.0133770 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1629.979: [CMS-concurrent-mark-start]
>> 1629.995: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1629.995: [CMS-concurrent-preclean-start]
>> 1629.996: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1629.996: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1634.998:
>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>> user=0.69 sys=0.00, real=5.00 secs]
>> 1634.999: [GC[YG occupancy: 108651 K (118016 K)]1634.999: [Rescan
>> (parallel) , 0.0134300 secs]1635.012: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 121501K(139444K), 0.0135530 secs]
>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>> 1635.012: [CMS-concurrent-sweep-start]
>> 1635.014: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1635.014: [CMS-concurrent-reset-start]
>> 1635.023: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1637.023: [GC [1 CMS-initial-mark: 12849K(21428K)] 121629K(139444K),
>> 0.0127330 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>> 1637.036: [CMS-concurrent-mark-start]
>> 1637.053: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1637.054: [CMS-concurrent-preclean-start]
>> 1637.054: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1637.054: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1642.062:
>> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1642.062: [GC[YG occupancy: 109100 K (118016 K)]1642.062: [Rescan
>> (parallel) , 0.0124310 secs]1642.075: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 121950K(139444K), 0.0125510 secs]
>> [Times: user=0.16 sys=0.00, real=0.02 secs]
>> 1642.075: [CMS-concurrent-sweep-start]
>> 1642.077: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1642.077: [CMS-concurrent-reset-start]
>> 1642.086: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1644.087: [GC [1 CMS-initial-mark: 12849K(21428K)] 122079K(139444K),
>> 0.0134300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1644.100: [CMS-concurrent-mark-start]
>> 1644.116: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1644.116: [CMS-concurrent-preclean-start]
>> 1644.116: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1644.116: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1649.125:
>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1649.126: [GC[YG occupancy: 109549 K (118016 K)]1649.126: [Rescan
>> (parallel) , 0.0126870 secs]1649.138: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 122399K(139444K), 0.0128010 secs]
>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>> 1649.139: [CMS-concurrent-sweep-start]
>> 1649.141: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1649.141: [CMS-concurrent-reset-start]
>> 1649.150: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1651.150: [GC [1 CMS-initial-mark: 12849K(21428K)] 122528K(139444K),
>> 0.0134790 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1651.164: [CMS-concurrent-mark-start]
>> 1651.179: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1651.179: [CMS-concurrent-preclean-start]
>> 1651.179: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1651.179: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1656.254:
>> [CMS-concurrent-abortable-preclean: 0.722/5.074 secs] [Times:
>> user=0.71 sys=0.01, real=5.07 secs]
>> 1656.254: [GC[YG occupancy: 110039 K (118016 K)]1656.254: [Rescan
>> (parallel) , 0.0092110 secs]1656.263: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 122889K(139444K), 0.0093170 secs]
>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>> 1656.263: [CMS-concurrent-sweep-start]
>> 1656.266: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1656.266: [CMS-concurrent-reset-start]
>> 1656.275: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1658.275: [GC [1 CMS-initial-mark: 12849K(21428K)] 123017K(139444K),
>> 0.0134150 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1658.289: [CMS-concurrent-mark-start]
>> 1658.305: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1658.306: [CMS-concurrent-preclean-start]
>> 1658.306: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1658.306: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1663.393:
>> [CMS-concurrent-abortable-preclean: 0.711/5.087 secs] [Times:
>> user=0.71 sys=0.00, real=5.08 secs]
>> 1663.393: [GC[YG occupancy: 110488 K (118016 K)]1663.393: [Rescan
>> (parallel) , 0.0132450 secs]1663.406: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 123338K(139444K), 0.0133600 secs]
>> [Times: user=0.15 sys=0.00, real=0.02 secs]
>> 1663.407: [CMS-concurrent-sweep-start]
>> 1663.409: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1663.409: [CMS-concurrent-reset-start]
>> 1663.418: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1665.418: [GC [1 CMS-initial-mark: 12849K(21428K)] 123467K(139444K),
>> 0.0135570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1665.432: [CMS-concurrent-mark-start]
>> 1665.447: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1665.447: [CMS-concurrent-preclean-start]
>> 1665.448: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1665.448: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1670.457:
>> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1670.457: [GC[YG occupancy: 110937 K (118016 K)]1670.457: [Rescan
>> (parallel) , 0.0142820 secs]1670.471: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 123787K(139444K), 0.0144010 secs]
>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>> 1670.472: [CMS-concurrent-sweep-start]
>> 1670.473: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1670.473: [CMS-concurrent-reset-start]
>> 1670.482: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1672.482: [GC [1 CMS-initial-mark: 12849K(21428K)] 123916K(139444K),
>> 0.0136110 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>> 1672.496: [CMS-concurrent-mark-start]
>> 1672.513: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1672.513: [CMS-concurrent-preclean-start]
>> 1672.513: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1672.513: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1677.530:
>> [CMS-concurrent-abortable-preclean: 0.711/5.017 secs] [Times:
>> user=0.71 sys=0.00, real=5.02 secs]
>> 1677.530: [GC[YG occupancy: 111387 K (118016 K)]1677.530: [Rescan
>> (parallel) , 0.0129210 secs]1677.543: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 124236K(139444K), 0.0130360 secs]
>> [Times: user=0.16 sys=0.00, real=0.02 secs]
>> 1677.543: [CMS-concurrent-sweep-start]
>> 1677.545: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1677.545: [CMS-concurrent-reset-start]
>> 1677.554: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1679.554: [GC [1 CMS-initial-mark: 12849K(21428K)] 124365K(139444K),
>> 0.0125140 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1679.567: [CMS-concurrent-mark-start]
>> 1679.584: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1679.584: [CMS-concurrent-preclean-start]
>> 1679.584: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1679.584: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1684.631:
>> [CMS-concurrent-abortable-preclean: 0.714/5.047 secs] [Times:
>> user=0.72 sys=0.00, real=5.04 secs]
>> 1684.631: [GC[YG occupancy: 112005 K (118016 K)]1684.631: [Rescan
>> (parallel) , 0.0146760 secs]1684.646: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 124855K(139444K), 0.0147930 secs]
>> [Times: user=0.16 sys=0.00, real=0.02 secs]
>> 1684.646: [CMS-concurrent-sweep-start]
>> 1684.648: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1684.648: [CMS-concurrent-reset-start]
>> 1684.656: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1686.656: [GC [1 CMS-initial-mark: 12849K(21428K)] 125048K(139444K),
>> 0.0138340 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1686.670: [CMS-concurrent-mark-start]
>> 1686.686: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1686.686: [CMS-concurrent-preclean-start]
>> 1686.687: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1686.687: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1691.689:
>> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1691.689: [GC[YG occupancy: 112518 K (118016 K)]1691.689: [Rescan
>> (parallel) , 0.0142600 secs]1691.703: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 12849K(21428K)] 125368K(139444K), 0.0143810 secs]
>> [Times: user=0.16 sys=0.00, real=0.02 secs]
>> 1691.703: [CMS-concurrent-sweep-start]
>> 1691.705: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1691.705: [CMS-concurrent-reset-start]
>> 1691.714: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1693.714: [GC [1 CMS-initial-mark: 12849K(21428K)] 125497K(139444K),
>> 0.0126710 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1693.727: [CMS-concurrent-mark-start]
>> 1693.744: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1693.744: [CMS-concurrent-preclean-start]
>> 1693.745: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1693.745: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1698.747:
>> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1698.748: [GC[YG occupancy: 112968 K (118016 K)]1698.748: [Rescan
>> (parallel) , 0.0147370 secs]1698.762: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 125818K(139444K), 0.0148490 secs]
>> [Times: user=0.17 sys=0.00, real=0.01 secs]
>> 1698.763: [CMS-concurrent-sweep-start]
>> 1698.764: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1698.764: [CMS-concurrent-reset-start]
>> 1698.773: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1700.773: [GC [1 CMS-initial-mark: 12849K(21428K)] 125946K(139444K),
>> 0.0128810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1700.786: [CMS-concurrent-mark-start]
>> 1700.804: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1700.804: [CMS-concurrent-preclean-start]
>> 1700.804: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1700.804: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1705.810:
>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1705.810: [GC[YG occupancy: 113417 K (118016 K)]1705.810: [Rescan
>> (parallel) , 0.0146750 secs]1705.825: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 126267K(139444K), 0.0147760 secs]
>> [Times: user=0.17 sys=0.00, real=0.02 secs]
>> 1705.825: [CMS-concurrent-sweep-start]
>> 1705.827: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1705.827: [CMS-concurrent-reset-start]
>> 1705.836: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1707.836: [GC [1 CMS-initial-mark: 12849K(21428K)] 126395K(139444K),
>> 0.0137570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1707.850: [CMS-concurrent-mark-start]
>> 1707.866: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1707.866: [CMS-concurrent-preclean-start]
>> 1707.867: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1707.867: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1712.878:
>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1712.878: [GC[YG occupancy: 113866 K (118016 K)]1712.878: [Rescan
>> (parallel) , 0.0116340 secs]1712.890: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 126716K(139444K), 0.0117350 secs]
>> [Times: user=0.12 sys=0.00, real=0.01 secs]
>> 1712.890: [CMS-concurrent-sweep-start]
>> 1712.893: [CMS-concurrent-sweep: 0.002/0.003 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1712.893: [CMS-concurrent-reset-start]
>> 1712.902: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1714.902: [GC [1 CMS-initial-mark: 12849K(21428K)] 126984K(139444K),
>> 0.0134590 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>> 1714.915: [CMS-concurrent-mark-start]
>> 1714.933: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1714.933: [CMS-concurrent-preclean-start]
>> 1714.934: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1714.934: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1719.940:
>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>> user=0.71 sys=0.00, real=5.00 secs]
>> 1719.940: [GC[YG occupancy: 114552 K (118016 K)]1719.940: [Rescan
>> (parallel) , 0.0141320 secs]1719.955: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 127402K(139444K), 0.0142280 secs]
>> [Times: user=0.16 sys=0.01, real=0.02 secs]
>> 1719.955: [CMS-concurrent-sweep-start]
>> 1719.956: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1719.956: [CMS-concurrent-reset-start]
>> 1719.965: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1721.966: [GC [1 CMS-initial-mark: 12849K(21428K)] 127530K(139444K),
>> 0.0139120 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1721.980: [CMS-concurrent-mark-start]
>> 1721.996: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1721.996: [CMS-concurrent-preclean-start]
>> 1721.997: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1721.997: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1727.010:
>> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
>> user=0.71 sys=0.00, real=5.01 secs]
>> 1727.010: [GC[YG occupancy: 115000 K (118016 K)]1727.010: [Rescan
>> (parallel) , 0.0123190 secs]1727.023: [weak refs processing, 0.0000130
>> secs] [1 CMS-remark: 12849K(21428K)] 127850K(139444K), 0.0124420 secs]
>> [Times: user=0.15 sys=0.00, real=0.01 secs]
>> 1727.023: [CMS-concurrent-sweep-start]
>> 1727.024: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1727.024: [CMS-concurrent-reset-start]
>> 1727.033: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1729.034: [GC [1 CMS-initial-mark: 12849K(21428K)] 127978K(139444K),
>> 0.0129330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1729.047: [CMS-concurrent-mark-start]
>> 1729.064: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1729.064: [CMS-concurrent-preclean-start]
>> 1729.064: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1729.064: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1734.075:
>> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1734.075: [GC[YG occupancy: 115449 K (118016 K)]1734.075: [Rescan
>> (parallel) , 0.0131600 secs]1734.088: [weak refs processing, 0.0000130
>> secs] [1 CMS-remark: 12849K(21428K)] 128298K(139444K), 0.0132810 secs]
>> [Times: user=0.16 sys=0.00, real=0.01 secs]
>> 1734.089: [CMS-concurrent-sweep-start]
>> 1734.091: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1734.091: [CMS-concurrent-reset-start]
>> 1734.100: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1736.100: [GC [1 CMS-initial-mark: 12849K(21428K)] 128427K(139444K),
>> 0.0141000 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
>> 1736.115: [CMS-concurrent-mark-start]
>> 1736.130: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1736.131: [CMS-concurrent-preclean-start]
>> 1736.131: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1736.131: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1741.139:
>> [CMS-concurrent-abortable-preclean: 0.702/5.008 secs] [Times:
>> user=0.70 sys=0.00, real=5.01 secs]
>> 1741.139: [GC[YG occupancy: 115897 K (118016 K)]1741.139: [Rescan
>> (parallel) , 0.0146880 secs]1741.154: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 12849K(21428K)] 128747K(139444K), 0.0148020 secs]
>> [Times: user=0.17 sys=0.00, real=0.02 secs]
>> 1741.154: [CMS-concurrent-sweep-start]
>> 1741.156: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1741.156: [CMS-concurrent-reset-start]
>> 1741.165: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1742.898: [GC [1 CMS-initial-mark: 12849K(21428K)] 129085K(139444K),
>> 0.0144050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1742.913: [CMS-concurrent-mark-start]
>> 1742.931: [CMS-concurrent-mark: 0.017/0.019 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1742.931: [CMS-concurrent-preclean-start]
>> 1742.932: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1742.932: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1748.016:
>> [CMS-concurrent-abortable-preclean: 0.728/5.084 secs] [Times:
>> user=0.73 sys=0.00, real=5.09 secs]
>> 1748.016: [GC[YG occupancy: 116596 K (118016 K)]1748.016: [Rescan
>> (parallel) , 0.0149950 secs]1748.031: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 129446K(139444K), 0.0150970 secs]
>> [Times: user=0.17 sys=0.00, real=0.01 secs]
>> 1748.031: [CMS-concurrent-sweep-start]
>> 1748.033: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1748.033: [CMS-concurrent-reset-start]
>> 1748.041: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1750.042: [GC [1 CMS-initial-mark: 12849K(21428K)] 129574K(139444K),
>> 0.0141840 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1750.056: [CMS-concurrent-mark-start]
>> 1750.073: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1750.073: [CMS-concurrent-preclean-start]
>> 1750.074: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1750.074: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1755.080:
>> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
>> user=0.70 sys=0.00, real=5.00 secs]
>> 1755.080: [GC[YG occupancy: 117044 K (118016 K)]1755.080: [Rescan
>> (parallel) , 0.0155560 secs]1755.096: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 12849K(21428K)] 129894K(139444K), 0.0156580 secs]
>> [Times: user=0.17 sys=0.00, real=0.02 secs]
>> 1755.096: [CMS-concurrent-sweep-start]
>> 1755.097: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1755.097: [CMS-concurrent-reset-start]
>> 1755.105: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1756.660: [GC 1756.660: [ParNew: 117108K->482K(118016K), 0.0081410
>> secs] 129958K->24535K(144568K), 0.0083030 secs] [Times: user=0.05
>> sys=0.01, real=0.01 secs]
>> 1756.668: [GC [1 CMS-initial-mark: 24053K(26552K)] 24599K(144568K),
>> 0.0015280 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1756.670: [CMS-concurrent-mark-start]
>> 1756.688: [CMS-concurrent-mark: 0.016/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1756.688: [CMS-concurrent-preclean-start]
>> 1756.689: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1756.689: [GC[YG occupancy: 546 K (118016 K)]1756.689: [Rescan
>> (parallel) , 0.0018170 secs]1756.691: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(26552K)] 24599K(144568K), 0.0019050 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1756.691: [CMS-concurrent-sweep-start]
>> 1756.694: [CMS-concurrent-sweep: 0.004/0.004 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1756.694: [CMS-concurrent-reset-start]
>> 1756.704: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1758.704: [GC [1 CMS-initial-mark: 24053K(40092K)] 25372K(158108K),
>> 0.0014030 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1758.705: [CMS-concurrent-mark-start]
>> 1758.720: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
>> sys=0.00, real=0.01 secs]
>> 1758.720: [CMS-concurrent-preclean-start]
>> 1758.720: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.01 sys=0.00, real=0.00 secs]
>> 1758.721: [GC[YG occupancy: 1319 K (118016 K)]1758.721: [Rescan
>> (parallel) , 0.0014940 secs]1758.722: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 25372K(158108K), 0.0015850 secs]
>> [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1758.722: [CMS-concurrent-sweep-start]
>> 1758.726: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1758.726: [CMS-concurrent-reset-start]
>> 1758.735: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1760.735: [GC [1 CMS-initial-mark: 24053K(40092K)] 25565K(158108K),
>> 0.0014530 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1760.737: [CMS-concurrent-mark-start]
>> 1760.755: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1760.755: [CMS-concurrent-preclean-start]
>> 1760.755: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1760.756: [GC[YG occupancy: 1512 K (118016 K)]1760.756: [Rescan
>> (parallel) , 0.0014970 secs]1760.757: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 25565K(158108K), 0.0015980 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1760.757: [CMS-concurrent-sweep-start]
>> 1760.761: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1760.761: [CMS-concurrent-reset-start]
>> 1760.770: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1762.770: [GC [1 CMS-initial-mark: 24053K(40092K)] 25693K(158108K),
>> 0.0013680 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1762.772: [CMS-concurrent-mark-start]
>> 1762.788: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1762.788: [CMS-concurrent-preclean-start]
>> 1762.788: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1762.788: [GC[YG occupancy: 1640 K (118016 K)]1762.789: [Rescan
>> (parallel) , 0.0020360 secs]1762.791: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 25693K(158108K), 0.0021450 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1762.791: [CMS-concurrent-sweep-start]
>> 1762.794: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1762.794: [CMS-concurrent-reset-start]
>> 1762.803: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1764.804: [GC [1 CMS-initial-mark: 24053K(40092K)] 26747K(158108K),
>> 0.0014620 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1764.805: [CMS-concurrent-mark-start]
>> 1764.819: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1764.819: [CMS-concurrent-preclean-start]
>> 1764.820: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1764.820: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1769.835:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.02 secs]
>> 1769.835: [GC[YG occupancy: 3015 K (118016 K)]1769.835: [Rescan
>> (parallel) , 0.0010360 secs]1769.836: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 27068K(158108K), 0.0011310 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1769.837: [CMS-concurrent-sweep-start]
>> 1769.840: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1769.840: [CMS-concurrent-reset-start]
>> 1769.849: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1771.850: [GC [1 CMS-initial-mark: 24053K(40092K)] 27196K(158108K),
>> 0.0014740 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1771.851: [CMS-concurrent-mark-start]
>> 1771.868: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1771.868: [CMS-concurrent-preclean-start]
>> 1771.868: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1771.868: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1776.913:
>> [CMS-concurrent-abortable-preclean: 0.112/5.044 secs] [Times:
>> user=0.12 sys=0.00, real=5.04 secs]
>> 1776.913: [GC[YG occupancy: 4052 K (118016 K)]1776.913: [Rescan
>> (parallel) , 0.0017790 secs]1776.915: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 24053K(40092K)] 28105K(158108K), 0.0018790 secs]
>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1776.915: [CMS-concurrent-sweep-start]
>> 1776.918: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1776.918: [CMS-concurrent-reset-start]
>> 1776.927: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1778.928: [GC [1 CMS-initial-mark: 24053K(40092K)] 28233K(158108K),
>> 0.0015470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1778.929: [CMS-concurrent-mark-start]
>> 1778.947: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1778.947: [CMS-concurrent-preclean-start]
>> 1778.947: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1778.947: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1783.963:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 1783.963: [GC[YG occupancy: 4505 K (118016 K)]1783.963: [Rescan
>> (parallel) , 0.0014480 secs]1783.965: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 28558K(158108K), 0.0015470 secs]
>> [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1783.965: [CMS-concurrent-sweep-start]
>> 1783.968: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1783.968: [CMS-concurrent-reset-start]
>> 1783.977: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1785.978: [GC [1 CMS-initial-mark: 24053K(40092K)] 28686K(158108K),
>> 0.0015760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1785.979: [CMS-concurrent-mark-start]
>> 1785.996: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1785.996: [CMS-concurrent-preclean-start]
>> 1785.996: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1785.996: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1791.009:
>> [CMS-concurrent-abortable-preclean: 0.107/5.013 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 1791.010: [GC[YG occupancy: 4954 K (118016 K)]1791.010: [Rescan
>> (parallel) , 0.0020030 secs]1791.012: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 24053K(40092K)] 29007K(158108K), 0.0021040 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1791.012: [CMS-concurrent-sweep-start]
>> 1791.015: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1791.015: [CMS-concurrent-reset-start]
>> 1791.023: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1793.023: [GC [1 CMS-initial-mark: 24053K(40092K)] 29136K(158108K),
>> 0.0017200 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1793.025: [CMS-concurrent-mark-start]
>> 1793.044: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.08
>> sys=0.00, real=0.02 secs]
>> 1793.044: [CMS-concurrent-preclean-start]
>> 1793.045: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1793.045: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1798.137:
>> [CMS-concurrent-abortable-preclean: 0.112/5.093 secs] [Times:
>> user=0.11 sys=0.00, real=5.09 secs]
>> 1798.137: [GC[YG occupancy: 6539 K (118016 K)]1798.137: [Rescan
>> (parallel) , 0.0016650 secs]1798.139: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 24053K(40092K)] 30592K(158108K), 0.0017600 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1798.139: [CMS-concurrent-sweep-start]
>> 1798.143: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1798.143: [CMS-concurrent-reset-start]
>> 1798.152: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1800.152: [GC [1 CMS-initial-mark: 24053K(40092K)] 30721K(158108K),
>> 0.0016650 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1800.154: [CMS-concurrent-mark-start]
>> 1800.170: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.01 secs]
>> 1800.170: [CMS-concurrent-preclean-start]
>> 1800.171: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1800.171: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1805.181:
>> [CMS-concurrent-abortable-preclean: 0.110/5.010 secs] [Times:
>> user=0.12 sys=0.00, real=5.01 secs]
>> 1805.181: [GC[YG occupancy: 8090 K (118016 K)]1805.181: [Rescan
>> (parallel) , 0.0018850 secs]1805.183: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 24053K(40092K)] 32143K(158108K), 0.0019860 secs]
>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1805.183: [CMS-concurrent-sweep-start]
>> 1805.187: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1805.187: [CMS-concurrent-reset-start]
>> 1805.196: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1807.196: [GC [1 CMS-initial-mark: 24053K(40092K)] 32272K(158108K),
>> 0.0018760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1807.198: [CMS-concurrent-mark-start]
>> 1807.216: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1807.216: [CMS-concurrent-preclean-start]
>> 1807.216: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1807.216: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1812.232:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 1812.232: [GC[YG occupancy: 8543 K (118016 K)]1812.232: [Rescan
>> (parallel) , 0.0020890 secs]1812.234: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 32596K(158108K), 0.0021910 secs]
>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1812.234: [CMS-concurrent-sweep-start]
>> 1812.238: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1812.238: [CMS-concurrent-reset-start]
>> 1812.247: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1812.928: [GC [1 CMS-initial-mark: 24053K(40092K)] 32661K(158108K),
>> 0.0019710 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1812.930: [CMS-concurrent-mark-start]
>> 1812.947: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1812.947: [CMS-concurrent-preclean-start]
>> 1812.947: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1812.948: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1817.963:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 1817.963: [GC[YG occupancy: 8928 K (118016 K)]1817.963: [Rescan
>> (parallel) , 0.0011790 secs]1817.964: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 32981K(158108K), 0.0012750 secs]
>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1817.964: [CMS-concurrent-sweep-start]
>> 1817.968: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1817.968: [CMS-concurrent-reset-start]
>> 1817.977: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1819.977: [GC [1 CMS-initial-mark: 24053K(40092K)] 33110K(158108K),
>> 0.0018900 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1819.979: [CMS-concurrent-mark-start]
>> 1819.996: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1819.997: [CMS-concurrent-preclean-start]
>> 1819.997: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1819.997: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1825.012:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 1825.013: [GC[YG occupancy: 9377 K (118016 K)]1825.013: [Rescan
>> (parallel) , 0.0020580 secs]1825.015: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 33431K(158108K), 0.0021510 secs]
>> [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 1825.015: [CMS-concurrent-sweep-start]
>> 1825.018: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1825.018: [CMS-concurrent-reset-start]
>> 1825.027: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1827.028: [GC [1 CMS-initial-mark: 24053K(40092K)] 33559K(158108K),
>> 0.0019140 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1827.030: [CMS-concurrent-mark-start]
>> 1827.047: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1827.047: [CMS-concurrent-preclean-start]
>> 1827.047: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1827.047: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1832.066:
>> [CMS-concurrent-abortable-preclean: 0.109/5.018 secs] [Times:
>> user=0.12 sys=0.00, real=5.02 secs]
>> 1832.066: [GC[YG occupancy: 9827 K (118016 K)]1832.066: [Rescan
>> (parallel) , 0.0019440 secs]1832.068: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 33880K(158108K), 0.0020410 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1832.068: [CMS-concurrent-sweep-start]
>> 1832.071: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1832.071: [CMS-concurrent-reset-start]
>> 1832.081: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1832.935: [GC [1 CMS-initial-mark: 24053K(40092K)] 34093K(158108K),
>> 0.0019830 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1832.937: [CMS-concurrent-mark-start]
>> 1832.954: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1832.954: [CMS-concurrent-preclean-start]
>> 1832.955: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1832.955: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1837.970:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 1837.970: [GC[YG occupancy: 10349 K (118016 K)]1837.970: [Rescan
>> (parallel) , 0.0019670 secs]1837.972: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 24053K(40092K)] 34402K(158108K), 0.0020800 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 1837.972: [CMS-concurrent-sweep-start]
>> 1837.976: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1837.976: [CMS-concurrent-reset-start]
>> 1837.985: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1839.985: [GC [1 CMS-initial-mark: 24053K(40092K)] 34531K(158108K),
>> 0.0020220 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1839.987: [CMS-concurrent-mark-start]
>> 1840.005: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.06
>> sys=0.01, real=0.02 secs]
>> 1840.005: [CMS-concurrent-preclean-start]
>> 1840.006: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1840.006: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1845.018:
>> [CMS-concurrent-abortable-preclean: 0.106/5.012 secs] [Times:
>> user=0.10 sys=0.01, real=5.01 secs]
>> 1845.018: [GC[YG occupancy: 10798 K (118016 K)]1845.018: [Rescan
>> (parallel) , 0.0015500 secs]1845.019: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 34851K(158108K), 0.0016500 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1845.020: [CMS-concurrent-sweep-start]
>> 1845.023: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1845.023: [CMS-concurrent-reset-start]
>> 1845.032: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1847.032: [GC [1 CMS-initial-mark: 24053K(40092K)] 34980K(158108K),
>> 0.0020600 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1847.035: [CMS-concurrent-mark-start]
>> 1847.051: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.01 secs]
>> 1847.051: [CMS-concurrent-preclean-start]
>> 1847.052: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1847.052: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1852.067:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.02 secs]
>> 1852.067: [GC[YG occupancy: 11247 K (118016 K)]1852.067: [Rescan
>> (parallel) , 0.0011880 secs]1852.069: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 35300K(158108K), 0.0012900 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1852.069: [CMS-concurrent-sweep-start]
>> 1852.072: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1852.072: [CMS-concurrent-reset-start]
>> 1852.081: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1854.082: [GC [1 CMS-initial-mark: 24053K(40092K)] 35429K(158108K),
>> 0.0021010 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1854.084: [CMS-concurrent-mark-start]
>> 1854.100: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1854.100: [CMS-concurrent-preclean-start]
>> 1854.101: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1854.101: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1859.116:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.12 sys=0.00, real=5.02 secs]
>> 1859.116: [GC[YG occupancy: 11701 K (118016 K)]1859.117: [Rescan
>> (parallel) , 0.0010230 secs]1859.118: [weak refs processing, 0.0000130
>> secs] [1 CMS-remark: 24053K(40092K)] 35754K(158108K), 0.0011230 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1859.118: [CMS-concurrent-sweep-start]
>> 1859.121: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1859.121: [CMS-concurrent-reset-start]
>> 1859.130: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1861.131: [GC [1 CMS-initial-mark: 24053K(40092K)] 35882K(158108K),
>> 0.0021240 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1861.133: [CMS-concurrent-mark-start]
>> 1861.149: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1861.149: [CMS-concurrent-preclean-start]
>> 1861.150: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1861.150: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1866.220:
>> [CMS-concurrent-abortable-preclean: 0.112/5.070 secs] [Times:
>> user=0.12 sys=0.00, real=5.07 secs]
>> 1866.220: [GC[YG occupancy: 12388 K (118016 K)]1866.220: [Rescan
>> (parallel) , 0.0027090 secs]1866.223: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 36441K(158108K), 0.0028070 secs]
>> [Times: user=0.02 sys=0.00, real=0.01 secs]
>> 1866.223: [CMS-concurrent-sweep-start]
>> 1866.227: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1866.227: [CMS-concurrent-reset-start]
>> 1866.236: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1868.236: [GC [1 CMS-initial-mark: 24053K(40092K)] 36569K(158108K),
>> 0.0023650 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1868.239: [CMS-concurrent-mark-start]
>> 1868.256: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1868.256: [CMS-concurrent-preclean-start]
>> 1868.257: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1868.257: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1873.267:
>> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
>> user=0.13 sys=0.00, real=5.01 secs]
>> 1873.268: [GC[YG occupancy: 12837 K (118016 K)]1873.268: [Rescan
>> (parallel) , 0.0018720 secs]1873.270: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 36890K(158108K), 0.0019730 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1873.270: [CMS-concurrent-sweep-start]
>> 1873.273: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1873.273: [CMS-concurrent-reset-start]
>> 1873.282: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1875.283: [GC [1 CMS-initial-mark: 24053K(40092K)] 37018K(158108K),
>> 0.0024410 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1875.285: [CMS-concurrent-mark-start]
>> 1875.302: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1875.302: [CMS-concurrent-preclean-start]
>> 1875.302: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1875.303: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1880.318:
>> [CMS-concurrent-abortable-preclean: 0.110/5.015 secs] [Times:
>> user=0.12 sys=0.00, real=5.02 secs]
>> 1880.318: [GC[YG occupancy: 13286 K (118016 K)]1880.318: [Rescan
>> (parallel) , 0.0023860 secs]1880.321: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 24053K(40092K)] 37339K(158108K), 0.0024910 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1880.321: [CMS-concurrent-sweep-start]
>> 1880.324: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1880.324: [CMS-concurrent-reset-start]
>> 1880.333: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 1882.334: [GC [1 CMS-initial-mark: 24053K(40092K)] 37467K(158108K),
>> 0.0024090 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1882.336: [CMS-concurrent-mark-start]
>> 1882.352: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1882.352: [CMS-concurrent-preclean-start]
>> 1882.353: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1882.353: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1887.368:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.12 sys=0.00, real=5.02 secs]
>> 1887.368: [GC[YG occupancy: 13739 K (118016 K)]1887.368: [Rescan
>> (parallel) , 0.0022370 secs]1887.370: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 37792K(158108K), 0.0023360 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 1887.371: [CMS-concurrent-sweep-start]
>> 1887.374: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1887.374: [CMS-concurrent-reset-start]
>> 1887.383: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1889.384: [GC [1 CMS-initial-mark: 24053K(40092K)] 37920K(158108K),
>> 0.0024690 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1889.386: [CMS-concurrent-mark-start]
>> 1889.404: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1889.404: [CMS-concurrent-preclean-start]
>> 1889.405: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.01 sys=0.00, real=0.00 secs]
>> 1889.405: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1894.488:
>> [CMS-concurrent-abortable-preclean: 0.112/5.083 secs] [Times:
>> user=0.11 sys=0.00, real=5.08 secs]
>> 1894.488: [GC[YG occupancy: 14241 K (118016 K)]1894.488: [Rescan
>> (parallel) , 0.0020670 secs]1894.490: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 38294K(158108K), 0.0021630 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1894.490: [CMS-concurrent-sweep-start]
>> 1894.494: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1894.494: [CMS-concurrent-reset-start]
>> 1894.503: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1896.503: [GC [1 CMS-initial-mark: 24053K(40092K)] 38422K(158108K),
>> 0.0025430 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1896.506: [CMS-concurrent-mark-start]
>> 1896.524: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1896.524: [CMS-concurrent-preclean-start]
>> 1896.525: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1896.525: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1901.540:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 1901.540: [GC[YG occupancy: 14690 K (118016 K)]1901.540: [Rescan
>> (parallel) , 0.0014810 secs]1901.542: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 38743K(158108K), 0.0015820 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1901.542: [CMS-concurrent-sweep-start]
>> 1901.545: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1901.545: [CMS-concurrent-reset-start]
>> 1901.555: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1903.555: [GC [1 CMS-initial-mark: 24053K(40092K)] 38871K(158108K),
>> 0.0025990 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1903.558: [CMS-concurrent-mark-start]
>> 1903.575: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1903.575: [CMS-concurrent-preclean-start]
>> 1903.576: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1903.576: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1908.586:
>> [CMS-concurrent-abortable-preclean: 0.105/5.010 secs] [Times:
>> user=0.10 sys=0.00, real=5.01 secs]
>> 1908.587: [GC[YG occupancy: 15207 K (118016 K)]1908.587: [Rescan
>> (parallel) , 0.0026240 secs]1908.589: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 39260K(158108K), 0.0027260 secs]
>> [Times: user=0.01 sys=0.00, real=0.00 secs]
>> 1908.589: [CMS-concurrent-sweep-start]
>> 1908.593: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1908.593: [CMS-concurrent-reset-start]
>> 1908.602: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1910.602: [GC [1 CMS-initial-mark: 24053K(40092K)] 39324K(158108K),
>> 0.0025610 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1910.605: [CMS-concurrent-mark-start]
>> 1910.621: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1910.621: [CMS-concurrent-preclean-start]
>> 1910.622: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.01 sys=0.00, real=0.00 secs]
>> 1910.622: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1915.684:
>> [CMS-concurrent-abortable-preclean: 0.112/5.062 secs] [Times:
>> user=0.11 sys=0.00, real=5.07 secs]
>> 1915.684: [GC[YG occupancy: 15592 K (118016 K)]1915.684: [Rescan
>> (parallel) , 0.0023940 secs]1915.687: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 24053K(40092K)] 39645K(158108K), 0.0025050 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 1915.687: [CMS-concurrent-sweep-start]
>> 1915.690: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1915.690: [CMS-concurrent-reset-start]
>> 1915.699: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1917.700: [GC [1 CMS-initial-mark: 24053K(40092K)] 39838K(158108K),
>> 0.0025010 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1917.702: [CMS-concurrent-mark-start]
>> 1917.719: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1917.719: [CMS-concurrent-preclean-start]
>> 1917.719: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1917.719: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1922.735:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.01, real=5.02 secs]
>> 1922.735: [GC[YG occupancy: 16198 K (118016 K)]1922.735: [Rescan
>> (parallel) , 0.0028750 secs]1922.738: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 40251K(158108K), 0.0029760 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 1922.738: [CMS-concurrent-sweep-start]
>> 1922.741: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1922.741: [CMS-concurrent-reset-start]
>> 1922.751: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1922.957: [GC [1 CMS-initial-mark: 24053K(40092K)] 40324K(158108K),
>> 0.0027380 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1922.960: [CMS-concurrent-mark-start]
>> 1922.978: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1922.978: [CMS-concurrent-preclean-start]
>> 1922.979: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1922.979: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1927.994:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.02 secs]
>> 1927.995: [GC[YG occupancy: 16645 K (118016 K)]1927.995: [Rescan
>> (parallel) , 0.0013210 secs]1927.996: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 40698K(158108K), 0.0017610 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 1927.996: [CMS-concurrent-sweep-start]
>> 1928.000: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1928.000: [CMS-concurrent-reset-start]
>> 1928.009: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1930.009: [GC [1 CMS-initial-mark: 24053K(40092K)] 40826K(158108K),
>> 0.0028310 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1930.012: [CMS-concurrent-mark-start]
>> 1930.028: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1930.028: [CMS-concurrent-preclean-start]
>> 1930.029: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1930.029: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1935.044:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 1935.045: [GC[YG occupancy: 17098 K (118016 K)]1935.045: [Rescan
>> (parallel) , 0.0015440 secs]1935.046: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 24053K(40092K)] 41151K(158108K), 0.0016490 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 1935.046: [CMS-concurrent-sweep-start]
>> 1935.050: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1935.050: [CMS-concurrent-reset-start]
>> 1935.059: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1937.059: [GC [1 CMS-initial-mark: 24053K(40092K)] 41279K(158108K),
>> 0.0028290 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1937.062: [CMS-concurrent-mark-start]
>> 1937.079: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1937.079: [CMS-concurrent-preclean-start]
>> 1937.079: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1937.079: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1942.095:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.01, real=5.02 secs]
>> 1942.095: [GC[YG occupancy: 17547 K (118016 K)]1942.095: [Rescan
>> (parallel) , 0.0030270 secs]1942.098: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 41600K(158108K), 0.0031250 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 1942.098: [CMS-concurrent-sweep-start]
>> 1942.101: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1942.101: [CMS-concurrent-reset-start]
>> 1942.111: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1944.111: [GC [1 CMS-initial-mark: 24053K(40092K)] 41728K(158108K),
>> 0.0028080 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1944.114: [CMS-concurrent-mark-start]
>> 1944.130: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1944.130: [CMS-concurrent-preclean-start]
>> 1944.131: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1944.131: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1949.146:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.12 sys=0.00, real=5.02 secs]
>> 1949.146: [GC[YG occupancy: 17996 K (118016 K)]1949.146: [Rescan
>> (parallel) , 0.0028800 secs]1949.149: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 24053K(40092K)] 42049K(158108K), 0.0029810 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 1949.149: [CMS-concurrent-sweep-start]
>> 1949.152: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1949.152: [CMS-concurrent-reset-start]
>> 1949.162: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1951.162: [GC [1 CMS-initial-mark: 24053K(40092K)] 42177K(158108K),
>> 0.0028760 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1951.165: [CMS-concurrent-mark-start]
>> 1951.184: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1951.184: [CMS-concurrent-preclean-start]
>> 1951.184: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1951.184: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1956.244:
>> [CMS-concurrent-abortable-preclean: 0.112/5.059 secs] [Times:
>> user=0.11 sys=0.01, real=5.05 secs]
>> 1956.244: [GC[YG occupancy: 18498 K (118016 K)]1956.244: [Rescan
>> (parallel) , 0.0019760 secs]1956.246: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 42551K(158108K), 0.0020750 secs]
>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>> 1956.246: [CMS-concurrent-sweep-start]
>> 1956.249: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1956.249: [CMS-concurrent-reset-start]
>> 1956.259: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1958.259: [GC [1 CMS-initial-mark: 24053K(40092K)] 42747K(158108K),
>> 0.0029160 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1958.262: [CMS-concurrent-mark-start]
>> 1958.279: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1958.279: [CMS-concurrent-preclean-start]
>> 1958.279: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1958.279: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1963.295:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 1963.295: [GC[YG occupancy: 18951 K (118016 K)]1963.295: [Rescan
>> (parallel) , 0.0020140 secs]1963.297: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 43004K(158108K), 0.0021100 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 1963.297: [CMS-concurrent-sweep-start]
>> 1963.300: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1963.300: [CMS-concurrent-reset-start]
>> 1963.310: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1965.310: [GC [1 CMS-initial-mark: 24053K(40092K)] 43132K(158108K),
>> 0.0029420 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1965.313: [CMS-concurrent-mark-start]
>> 1965.329: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1965.329: [CMS-concurrent-preclean-start]
>> 1965.330: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1965.330: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1970.345:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.02 secs]
>> 1970.345: [GC[YG occupancy: 19400 K (118016 K)]1970.345: [Rescan
>> (parallel) , 0.0031610 secs]1970.349: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 43453K(158108K), 0.0032580 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 1970.349: [CMS-concurrent-sweep-start]
>> 1970.352: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1970.352: [CMS-concurrent-reset-start]
>> 1970.361: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1972.362: [GC [1 CMS-initial-mark: 24053K(40092K)] 43581K(158108K),
>> 0.0029960 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1972.365: [CMS-concurrent-mark-start]
>> 1972.381: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 1972.381: [CMS-concurrent-preclean-start]
>> 1972.382: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1972.382: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1977.397:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.12 sys=0.00, real=5.02 secs]
>> 1977.398: [GC[YG occupancy: 19849 K (118016 K)]1977.398: [Rescan
>> (parallel) , 0.0018110 secs]1977.399: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 43902K(158108K), 0.0019100 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 1977.400: [CMS-concurrent-sweep-start]
>> 1977.403: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1977.403: [CMS-concurrent-reset-start]
>> 1977.412: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1979.413: [GC [1 CMS-initial-mark: 24053K(40092K)] 44031K(158108K),
>> 0.0030240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 1979.416: [CMS-concurrent-mark-start]
>> 1979.434: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
>> sys=0.00, real=0.02 secs]
>> 1979.434: [CMS-concurrent-preclean-start]
>> 1979.434: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1979.434: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1984.511:
>> [CMS-concurrent-abortable-preclean: 0.112/5.077 secs] [Times:
>> user=0.12 sys=0.00, real=5.07 secs]
>> 1984.511: [GC[YG occupancy: 20556 K (118016 K)]1984.511: [Rescan
>> (parallel) , 0.0032740 secs]1984.514: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 44609K(158108K), 0.0033720 secs]
>> [Times: user=0.03 sys=0.00, real=0.01 secs]
>> 1984.515: [CMS-concurrent-sweep-start]
>> 1984.518: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1984.518: [CMS-concurrent-reset-start]
>> 1984.527: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1986.528: [GC [1 CMS-initial-mark: 24053K(40092K)] 44737K(158108K),
>> 0.0032890 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1986.531: [CMS-concurrent-mark-start]
>> 1986.548: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 1986.548: [CMS-concurrent-preclean-start]
>> 1986.548: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1986.548: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1991.564:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.12 sys=0.00, real=5.02 secs]
>> 1991.564: [GC[YG occupancy: 21005 K (118016 K)]1991.564: [Rescan
>> (parallel) , 0.0022540 secs]1991.566: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 45058K(158108K), 0.0023650 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 1991.566: [CMS-concurrent-sweep-start]
>> 1991.570: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 1991.570: [CMS-concurrent-reset-start]
>> 1991.579: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 1993.579: [GC [1 CMS-initial-mark: 24053K(40092K)] 45187K(158108K),
>> 0.0032480 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 1993.583: [CMS-concurrent-mark-start]
>> 1993.599: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 1993.599: [CMS-concurrent-preclean-start]
>> 1993.600: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 1993.600: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 1998.688:
>> [CMS-concurrent-abortable-preclean: 0.112/5.089 secs] [Times:
>> user=0.10 sys=0.01, real=5.09 secs]
>> 1998.689: [GC[YG occupancy: 21454 K (118016 K)]1998.689: [Rescan
>> (parallel) , 0.0025510 secs]1998.691: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 45507K(158108K), 0.0026500 secs]
>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>> 1998.691: [CMS-concurrent-sweep-start]
>> 1998.695: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 1998.695: [CMS-concurrent-reset-start]
>> 1998.704: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 2000.704: [GC [1 CMS-initial-mark: 24053K(40092K)] 45636K(158108K),
>> 0.0033350 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2000.708: [CMS-concurrent-mark-start]
>> 2000.726: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2000.726: [CMS-concurrent-preclean-start]
>> 2000.726: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2000.726: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2005.742:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.12 sys=0.00, real=5.01 secs]
>> 2005.742: [GC[YG occupancy: 21968 K (118016 K)]2005.742: [Rescan
>> (parallel) , 0.0027300 secs]2005.745: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 46021K(158108K), 0.0028560 secs]
>> [Times: user=0.02 sys=0.01, real=0.01 secs]
>> 2005.745: [CMS-concurrent-sweep-start]
>> 2005.748: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2005.748: [CMS-concurrent-reset-start]
>> 2005.757: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.01, real=0.01 secs]
>> 2007.758: [GC [1 CMS-initial-mark: 24053K(40092K)] 46217K(158108K),
>> 0.0033290 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2007.761: [CMS-concurrent-mark-start]
>> 2007.778: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 2007.778: [CMS-concurrent-preclean-start]
>> 2007.778: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2007.778: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2012.794:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.12 sys=0.00, real=5.02 secs]
>> 2012.794: [GC[YG occupancy: 22421 K (118016 K)]2012.794: [Rescan
>> (parallel) , 0.0036890 secs]2012.798: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 46474K(158108K), 0.0037910 secs]
>> [Times: user=0.02 sys=0.01, real=0.00 secs]
>> 2012.798: [CMS-concurrent-sweep-start]
>> 2012.801: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2012.801: [CMS-concurrent-reset-start]
>> 2012.810: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2012.980: [GC [1 CMS-initial-mark: 24053K(40092K)] 46547K(158108K),
>> 0.0033990 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 2012.984: [CMS-concurrent-mark-start]
>> 2013.004: [CMS-concurrent-mark: 0.019/0.020 secs] [Times: user=0.06
>> sys=0.01, real=0.02 secs]
>> 2013.004: [CMS-concurrent-preclean-start]
>> 2013.005: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2013.005: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2018.020:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.12 sys=0.00, real=5.01 secs]
>> 2018.020: [GC[YG occupancy: 22867 K (118016 K)]2018.020: [Rescan
>> (parallel) , 0.0025180 secs]2018.023: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 46920K(158108K), 0.0026190 secs]
>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>> 2018.023: [CMS-concurrent-sweep-start]
>> 2018.026: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 2018.026: [CMS-concurrent-reset-start]
>> 2018.036: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 2020.036: [GC [1 CMS-initial-mark: 24053K(40092K)] 47048K(158108K),
>> 0.0034020 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2020.039: [CMS-concurrent-mark-start]
>> 2020.057: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2020.057: [CMS-concurrent-preclean-start]
>> 2020.058: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2020.058: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2025.073:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 2025.073: [GC[YG occupancy: 23316 K (118016 K)]2025.073: [Rescan
>> (parallel) , 0.0020110 secs]2025.075: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 47369K(158108K), 0.0021080 secs]
>> [Times: user=0.02 sys=0.00, real=0.00 secs]
>> 2025.075: [CMS-concurrent-sweep-start]
>> 2025.079: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2025.079: [CMS-concurrent-reset-start]
>> 2025.088: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2027.088: [GC [1 CMS-initial-mark: 24053K(40092K)] 47498K(158108K),
>> 0.0034100 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2027.092: [CMS-concurrent-mark-start]
>> 2027.108: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2027.108: [CMS-concurrent-preclean-start]
>> 2027.109: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2027.109: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2032.120:
>> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
>> user=0.10 sys=0.00, real=5.01 secs]
>> 2032.120: [GC[YG occupancy: 23765 K (118016 K)]2032.120: [Rescan
>> (parallel) , 0.0025970 secs]2032.123: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 47818K(158108K), 0.0026940 secs]
>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>> 2032.123: [CMS-concurrent-sweep-start]
>> 2032.126: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 2032.126: [CMS-concurrent-reset-start]
>> 2032.135: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 2034.136: [GC [1 CMS-initial-mark: 24053K(40092K)] 47951K(158108K),
>> 0.0034720 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2034.139: [CMS-concurrent-mark-start]
>> 2034.156: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2034.156: [CMS-concurrent-preclean-start]
>> 2034.156: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2034.156: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2039.171:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 2039.172: [GC[YG occupancy: 24218 K (118016 K)]2039.172: [Rescan
>> (parallel) , 0.0038590 secs]2039.176: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 48271K(158108K), 0.0039560 secs]
>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>> 2039.176: [CMS-concurrent-sweep-start]
>> 2039.179: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2039.179: [CMS-concurrent-reset-start]
>> 2039.188: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2041.188: [GC [1 CMS-initial-mark: 24053K(40092K)] 48400K(158108K),
>> 0.0035110 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2041.192: [CMS-concurrent-mark-start]
>> 2041.209: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 2041.209: [CMS-concurrent-preclean-start]
>> 2041.209: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2041.209: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2046.268:
>> [CMS-concurrent-abortable-preclean: 0.108/5.058 secs] [Times:
>> user=0.12 sys=0.00, real=5.06 secs]
>> 2046.268: [GC[YG occupancy: 24813 K (118016 K)]2046.268: [Rescan
>> (parallel) , 0.0042050 secs]2046.272: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 48866K(158108K), 0.0043070 secs]
>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>> 2046.272: [CMS-concurrent-sweep-start]
>> 2046.275: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 2046.275: [CMS-concurrent-reset-start]
>> 2046.285: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2048.285: [GC [1 CMS-initial-mark: 24053K(40092K)] 48994K(158108K),
>> 0.0037700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2048.289: [CMS-concurrent-mark-start]
>> 2048.307: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2048.307: [CMS-concurrent-preclean-start]
>> 2048.307: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2048.307: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2053.323:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 2053.323: [GC[YG occupancy: 25262 K (118016 K)]2053.323: [Rescan
>> (parallel) , 0.0030780 secs]2053.326: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 49315K(158108K), 0.0031760 secs]
>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>> 2053.326: [CMS-concurrent-sweep-start]
>> 2053.329: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2053.329: [CMS-concurrent-reset-start]
>> 2053.338: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2055.339: [GC [1 CMS-initial-mark: 24053K(40092K)] 49444K(158108K),
>> 0.0037730 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2055.343: [CMS-concurrent-mark-start]
>> 2055.359: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 2055.359: [CMS-concurrent-preclean-start]
>> 2055.360: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2055.360: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2060.373:
>> [CMS-concurrent-abortable-preclean: 0.107/5.013 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 2060.373: [GC[YG occupancy: 25715 K (118016 K)]2060.373: [Rescan
>> (parallel) , 0.0037090 secs]2060.377: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 49768K(158108K), 0.0038110 secs]
>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>> 2060.377: [CMS-concurrent-sweep-start]
>> 2060.380: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2060.380: [CMS-concurrent-reset-start]
>> 2060.389: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2062.390: [GC [1 CMS-initial-mark: 24053K(40092K)] 49897K(158108K),
>> 0.0037860 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2062.394: [CMS-concurrent-mark-start]
>> 2062.410: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2062.410: [CMS-concurrent-preclean-start]
>> 2062.411: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2062.411: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2067.426:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.02 secs]
>> 2067.427: [GC[YG occupancy: 26231 K (118016 K)]2067.427: [Rescan
>> (parallel) , 0.0031980 secs]2067.430: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 50284K(158108K), 0.0033100 secs]
>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>> 2067.430: [CMS-concurrent-sweep-start]
>> 2067.433: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 2067.433: [CMS-concurrent-reset-start]
>> 2067.443: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2069.443: [GC [1 CMS-initial-mark: 24053K(40092K)] 50412K(158108K),
>> 0.0038060 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 2069.447: [CMS-concurrent-mark-start]
>> 2069.465: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2069.465: [CMS-concurrent-preclean-start]
>> 2069.465: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2069.465: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2074.535:
>> [CMS-concurrent-abortable-preclean: 0.112/5.070 secs] [Times:
>> user=0.12 sys=0.00, real=5.06 secs]
>> 2074.535: [GC[YG occupancy: 26749 K (118016 K)]2074.535: [Rescan
>> (parallel) , 0.0040450 secs]2074.539: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 50802K(158108K), 0.0041460 secs]
>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>> 2074.539: [CMS-concurrent-sweep-start]
>> 2074.543: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2074.543: [CMS-concurrent-reset-start]
>> 2074.552: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 2076.552: [GC [1 CMS-initial-mark: 24053K(40092K)] 50930K(158108K),
>> 0.0038960 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 2076.556: [CMS-concurrent-mark-start]
>> 2076.575: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2076.575: [CMS-concurrent-preclean-start]
>> 2076.575: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2076.575: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2081.590:
>> [CMS-concurrent-abortable-preclean: 0.109/5.014 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 2081.590: [GC[YG occupancy: 27198 K (118016 K)]2081.590: [Rescan
>> (parallel) , 0.0042420 secs]2081.594: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 51251K(158108K), 0.0043450 secs]
>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>> 2081.594: [CMS-concurrent-sweep-start]
>> 2081.597: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2081.597: [CMS-concurrent-reset-start]
>> 2081.607: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2083.607: [GC [1 CMS-initial-mark: 24053K(40092K)] 51447K(158108K),
>> 0.0038630 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2083.611: [CMS-concurrent-mark-start]
>> 2083.628: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2083.628: [CMS-concurrent-preclean-start]
>> 2083.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2083.628: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2088.642:
>> [CMS-concurrent-abortable-preclean: 0.108/5.014 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 2088.642: [GC[YG occupancy: 27651 K (118016 K)]2088.642: [Rescan
>> (parallel) , 0.0031520 secs]2088.645: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 51704K(158108K), 0.0032520 secs]
>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>> 2088.645: [CMS-concurrent-sweep-start]
>> 2088.649: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2088.649: [CMS-concurrent-reset-start]
>> 2088.658: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2090.658: [GC [1 CMS-initial-mark: 24053K(40092K)] 51832K(158108K),
>> 0.0039130 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2090.662: [CMS-concurrent-mark-start]
>> 2090.678: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 2090.678: [CMS-concurrent-preclean-start]
>> 2090.679: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2090.679: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2095.690:
>> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 2095.690: [GC[YG occupancy: 28100 K (118016 K)]2095.690: [Rescan
>> (parallel) , 0.0024460 secs]2095.693: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 52153K(158108K), 0.0025460 secs]
>> [Times: user=0.03 sys=0.00, real=0.00 secs]
>> 2095.693: [CMS-concurrent-sweep-start]
>> 2095.696: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 2095.696: [CMS-concurrent-reset-start]
>> 2095.705: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 2096.616: [GC [1 CMS-initial-mark: 24053K(40092K)] 53289K(158108K),
>> 0.0039340 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2096.620: [CMS-concurrent-mark-start]
>> 2096.637: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 2096.637: [CMS-concurrent-preclean-start]
>> 2096.638: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2096.638: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2101.654:
>> [CMS-concurrent-abortable-preclean: 0.110/5.015 secs] [Times:
>> user=0.12 sys=0.00, real=5.01 secs]
>> 2101.654: [GC[YG occupancy: 29557 K (118016 K)]2101.654: [Rescan
>> (parallel) , 0.0034020 secs]2101.657: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 53610K(158108K), 0.0035000 secs]
>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>> 2101.657: [CMS-concurrent-sweep-start]
>> 2101.661: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2101.661: [CMS-concurrent-reset-start]
>> 2101.670: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2103.004: [GC [1 CMS-initial-mark: 24053K(40092K)] 53997K(158108K),
>> 0.0042590 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2103.009: [CMS-concurrent-mark-start]
>> 2103.027: [CMS-concurrent-mark: 0.017/0.019 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2103.027: [CMS-concurrent-preclean-start]
>> 2103.028: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2103.028: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2108.043:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.10 sys=0.01, real=5.02 secs]
>> 2108.043: [GC[YG occupancy: 30385 K (118016 K)]2108.044: [Rescan
>> (parallel) , 0.0048950 secs]2108.048: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 54438K(158108K), 0.0049930 secs]
>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>> 2108.049: [CMS-concurrent-sweep-start]
>> 2108.052: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2108.052: [CMS-concurrent-reset-start]
>> 2108.061: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2110.062: [GC [1 CMS-initial-mark: 24053K(40092K)] 54502K(158108K),
>> 0.0042120 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
>> 2110.066: [CMS-concurrent-mark-start]
>> 2110.084: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2110.084: [CMS-concurrent-preclean-start]
>> 2110.085: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2110.085: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2115.100:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 2115.101: [GC[YG occupancy: 30770 K (118016 K)]2115.101: [Rescan
>> (parallel) , 0.0049040 secs]2115.106: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 54823K(158108K), 0.0050080 secs]
>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>> 2115.106: [CMS-concurrent-sweep-start]
>> 2115.109: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2115.109: [CMS-concurrent-reset-start]
>> 2115.118: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2117.118: [GC [1 CMS-initial-mark: 24053K(40092K)] 54952K(158108K),
>> 0.0042490 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2117.123: [CMS-concurrent-mark-start]
>> 2117.139: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 2117.139: [CMS-concurrent-preclean-start]
>> 2117.140: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2117.140: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2122.155:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.12 sys=0.00, real=5.02 secs]
>> 2122.155: [GC[YG occupancy: 31219 K (118016 K)]2122.155: [Rescan
>> (parallel) , 0.0036460 secs]2122.159: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 55272K(158108K), 0.0037440 secs]
>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>> 2122.159: [CMS-concurrent-sweep-start]
>> 2122.162: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2122.162: [CMS-concurrent-reset-start]
>> 2122.172: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2124.172: [GC [1 CMS-initial-mark: 24053K(40092K)] 55401K(158108K),
>> 0.0043010 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 2124.176: [CMS-concurrent-mark-start]
>> 2124.195: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2124.195: [CMS-concurrent-preclean-start]
>> 2124.195: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2124.195: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2129.211:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.12 sys=0.00, real=5.01 secs]
>> 2129.211: [GC[YG occupancy: 31669 K (118016 K)]2129.211: [Rescan
>> (parallel) , 0.0049870 secs]2129.216: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 55722K(158108K), 0.0050860 secs]
>> [Times: user=0.04 sys=0.00, real=0.01 secs]
>> 2129.216: [CMS-concurrent-sweep-start]
>> 2129.219: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 2129.219: [CMS-concurrent-reset-start]
>> 2129.228: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2131.229: [GC [1 CMS-initial-mark: 24053K(40092K)] 55850K(158108K),
>> 0.0042340 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2131.233: [CMS-concurrent-mark-start]
>> 2131.249: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2131.249: [CMS-concurrent-preclean-start]
>> 2131.249: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2131.249: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2136.292:
>> [CMS-concurrent-abortable-preclean: 0.108/5.042 secs] [Times:
>> user=0.11 sys=0.00, real=5.04 secs]
>> 2136.292: [GC[YG occupancy: 32174 K (118016 K)]2136.292: [Rescan
>> (parallel) , 0.0037250 secs]2136.296: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 56227K(158108K), 0.0038250 secs]
>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>> 2136.296: [CMS-concurrent-sweep-start]
>> 2136.299: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2136.299: [CMS-concurrent-reset-start]
>> 2136.308: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2138.309: [GC [1 CMS-initial-mark: 24053K(40092K)] 56356K(158108K),
>> 0.0043040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2138.313: [CMS-concurrent-mark-start]
>> 2138.329: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.05
>> sys=0.01, real=0.02 secs]
>> 2138.329: [CMS-concurrent-preclean-start]
>> 2138.329: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2138.329: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2143.341:
>> [CMS-concurrent-abortable-preclean: 0.106/5.011 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 2143.341: [GC[YG occupancy: 32623 K (118016 K)]2143.341: [Rescan
>> (parallel) , 0.0038660 secs]2143.345: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 56676K(158108K), 0.0039760 secs]
>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>> 2143.345: [CMS-concurrent-sweep-start]
>> 2143.349: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2143.349: [CMS-concurrent-reset-start]
>> 2143.358: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2145.358: [GC [1 CMS-initial-mark: 24053K(40092K)] 56805K(158108K),
>> 0.0043390 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2145.362: [CMS-concurrent-mark-start]
>> 2145.379: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 2145.379: [CMS-concurrent-preclean-start]
>> 2145.379: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2145.379: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2150.393:
>> [CMS-concurrent-abortable-preclean: 0.108/5.014 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 2150.393: [GC[YG occupancy: 33073 K (118016 K)]2150.393: [Rescan
>> (parallel) , 0.0038190 secs]2150.397: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 57126K(158108K), 0.0039210 secs]
>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>> 2150.397: [CMS-concurrent-sweep-start]
>> 2150.400: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2150.400: [CMS-concurrent-reset-start]
>> 2150.410: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2152.410: [GC [1 CMS-initial-mark: 24053K(40092K)] 57254K(158108K),
>> 0.0044080 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2152.415: [CMS-concurrent-mark-start]
>> 2152.431: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 2152.431: [CMS-concurrent-preclean-start]
>> 2152.432: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2152.432: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2157.447:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.01, real=5.02 secs]
>> 2157.447: [GC[YG occupancy: 33522 K (118016 K)]2157.447: [Rescan
>> (parallel) , 0.0038130 secs]2157.451: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 57575K(158108K), 0.0039160 secs]
>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>> 2157.451: [CMS-concurrent-sweep-start]
>> 2157.454: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 2157.454: [CMS-concurrent-reset-start]
>> 2157.464: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 2159.464: [GC [1 CMS-initial-mark: 24053K(40092K)] 57707K(158108K),
>> 0.0045170 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2159.469: [CMS-concurrent-mark-start]
>> 2159.483: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
>> sys=0.00, real=0.01 secs]
>> 2159.483: [CMS-concurrent-preclean-start]
>> 2159.483: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2159.483: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2164.491:
>> [CMS-concurrent-abortable-preclean: 0.111/5.007 secs] [Times:
>> user=0.12 sys=0.00, real=5.01 secs]
>> 2164.491: [GC[YG occupancy: 34293 K (118016 K)]2164.491: [Rescan
>> (parallel) , 0.0052070 secs]2164.496: [weak refs processing, 0.0000120
>> secs] [1 CMS-remark: 24053K(40092K)] 58347K(158108K), 0.0053130 secs]
>> [Times: user=0.06 sys=0.00, real=0.01 secs]
>> 2164.496: [CMS-concurrent-sweep-start]
>> 2164.500: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2164.500: [CMS-concurrent-reset-start]
>> 2164.509: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.01, real=0.01 secs]
>> 2166.509: [GC [1 CMS-initial-mark: 24053K(40092K)] 58475K(158108K),
>> 0.0045900 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2166.514: [CMS-concurrent-mark-start]
>> 2166.533: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.07
>> sys=0.00, real=0.02 secs]
>> 2166.533: [CMS-concurrent-preclean-start]
>> 2166.533: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2166.533: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2171.549:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.02 secs]
>> 2171.549: [GC[YG occupancy: 34743 K (118016 K)]2171.549: [Rescan
>> (parallel) , 0.0052200 secs]2171.554: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 58796K(158108K), 0.0053210 secs]
>> [Times: user=0.05 sys=0.00, real=0.01 secs]
>> 2171.554: [CMS-concurrent-sweep-start]
>> 2171.558: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2171.558: [CMS-concurrent-reset-start]
>> 2171.567: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2173.567: [GC [1 CMS-initial-mark: 24053K(40092K)] 58924K(158108K),
>> 0.0046700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
>> 2173.572: [CMS-concurrent-mark-start]
>> 2173.588: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
>> sys=0.00, real=0.02 secs]
>> 2173.588: [CMS-concurrent-preclean-start]
>> 2173.589: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2173.589: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2178.604:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.10 sys=0.01, real=5.02 secs]
>> 2178.605: [GC[YG occupancy: 35192 K (118016 K)]2178.605: [Rescan
>> (parallel) , 0.0041460 secs]2178.609: [weak refs processing, 0.0000110
>> secs] [1 CMS-remark: 24053K(40092K)] 59245K(158108K), 0.0042450 secs]
>> [Times: user=0.04 sys=0.00, real=0.00 secs]
>> 2178.609: [CMS-concurrent-sweep-start]
>> 2178.612: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
>> sys=0.00, real=0.00 secs]
>> 2178.612: [CMS-concurrent-reset-start]
>> 2178.622: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
>> sys=0.00, real=0.01 secs]
>> 2180.622: [GC [1 CMS-initial-mark: 24053K(40092K)] 59373K(158108K),
>> 0.0047200 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
>> 2180.627: [CMS-concurrent-mark-start]
>> 2180.645: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
>> sys=0.00, real=0.02 secs]
>> 2180.645: [CMS-concurrent-preclean-start]
>> 2180.645: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
>> user=0.00 sys=0.00, real=0.00 secs]
>> 2180.645: [CMS-concurrent-abortable-preclean-start]
>> CMS: abort preclean due to time 2185.661:
>> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
>> user=0.11 sys=0.00, real=5.01 secs]
>> 2185.661: [GC[YG occupancy: 35645 K (118016 K)]2185.661: [Rescan
>> (parallel) , 0.0050730 secs]2185.666: [weak refs processing, 0.0000100
>> secs] [1 CMS-remark: 24053K(40092K)] 59698K(158108K), 0.0051720 secs]
>> [Times: user=0.04 sys=0.01, real=0.01 secs]
>> 2185.666: [CMS-concurrent-sweep-start]
>> 2185.670: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
>> sys=0.00, real=0.00 secs]
>> 2185.670: [CMS-concurrent-reset-start]
>> 2185.679: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
>> sys=0.00, real=0.01 secs]
>> 2187.679: [GC [1 CMS-initial-mark: 24053K(40092K)] 59826K(158108K),
>> 0.0047350 secs]
>>
>> --
>> gregross:)
>>



-- 
gregross:)

Re: long garbage collecting pause

Posted by Michael Segel <mi...@hotmail.com>.
Have you implemented MSLABS?
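MSLAB (the MemStore-Local Allocation Buffer) targets the old-generation fragmentation that typically drives these long CMS pauses. As a rough sketch, the relevant hbase-site.xml properties in the 0.92 line look like the following; the values shown are my recollection of the defaults, so verify them against your release before relying on them:

```xml
<!-- hbase-site.xml: MSLAB settings (assumed defaults shown; tune as needed) -->
<property>
  <!-- On by default in 0.92; allocates each memstore's cells from
       fixed-size chunks so a flushed memstore frees large contiguous
       blocks instead of scattered fragments -->
  <name>hbase.hregion.memstore.mslab.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Size of each allocation chunk, in bytes (2MB) -->
  <name>hbase.hregion.memstore.mslab.chunksize</name>
  <value>2097152</value>
</property>
<property>
  <!-- Cells larger than this (256KB) bypass MSLAB and are allocated
       directly on the heap -->
  <name>hbase.hregion.memstore.mslab.max.allocation</name>
  <value>262144</value>
</property>
```

One caveat relevant to this thread: with cells of up to 1MB, anything over the max.allocation threshold is allocated outside MSLAB, so the largest cells can still fragment the old generation even with MSLAB enabled.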

On Oct 1, 2012, at 3:35 PM, Greg Ross <gr...@ngmoco.com> wrote:

> Hi,
> 
> I'm having difficulty with a mapreduce job that has reducers that read
> from and write to HBase, version 0.92.1, r1298924. Row sizes vary
> greatly, as does the number of cells per row, though cells typically
> number in the tens at most. The max cell size is 1MB.
> 
> I see the following in the logs followed by the region server promptly
> shutting down:
> 
> 2012-10-01 19:08:47,858 [regionserver60020] WARN
> org.apache.hadoop.hbase.util.Sleeper: We slept 28970ms instead of
> 3000ms, this is likely due to a long garbage collecting pause and it's
> usually bad, see
> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> 
> The full logs, including GC are below.
> 
> Although new to HBase, I've read up on the likely GC issues and their
> remedies. I've implemented the recommended solutions, but to no
> avail.
> 
> Here's what I've tried:
> 
> (1) increased the RAM to 4G
> (2) set -XX:+UseConcMarkSweepGC
> (3) set -XX:+UseParNewGC
> (4) set -XX:CMSInitiatingOccupancyFraction=N where I've attempted N=[40..70]
> (5) I've called context.progress() in the reducer before and after
> reading and writing
> (6) memstore is enabled
> 
> Is there anything else that I might have missed?
> 
> Thanks,
> 
> Greg
> 
> 
> hbase logs
> ========
> 
> 2012-10-01 19:09:48,293
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/.tmp/d2ee47650b224189b0c27d1c20929c03
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> 2012-10-01 19:09:48,884
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 5 file(s) in U of
> orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.
> into d2ee47650b224189b0c27d1c20929c03, size=723.0m; total size for
> store is 723.0m
> 2012-10-01 19:09:48,884
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.,
> storeName=U, fileCount=5, fileSize=1.4g, priority=2,
> time=10631266687564968; duration=35sec
> 2012-10-01 19:09:48,886
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
> 2012-10-01 19:09:48,887
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 5
> file(s) in U of
> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/.tmp,
> seqid=132201184, totalSize=1.4g
> 2012-10-01 19:10:04,191
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/.tmp/2e5534fea8b24eaf9cc1e05dea788c01
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> 2012-10-01 19:10:04,868
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 5 file(s) in U of
> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
> into 2e5534fea8b24eaf9cc1e05dea788c01, size=626.5m; total size for
> store is 626.5m
> 2012-10-01 19:10:04,868
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
> storeName=U, fileCount=5, fileSize=1.4g, priority=2,
> time=10631266696614208; duration=15sec
> 2012-10-01 19:14:04,992
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
> 2012-10-01 19:14:04,993
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/.tmp,
> seqid=132198830, totalSize=863.8m
> 2012-10-01 19:14:19,147
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/.tmp/b741f8501ad248418c48262d751f6e86
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/6ace1f2f0b7ad3e454f738d66255047f/U/b741f8501ad248418c48262d751f6e86
> 2012-10-01 19:14:19,381
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
> into b741f8501ad248418c48262d751f6e86, size=851.4m; total size for
> store is 851.4m
> 2012-10-01 19:14:19,381
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.,
> storeName=U, fileCount=2, fileSize=863.8m, priority=5,
> time=10631557965747111; duration=14sec
> 2012-10-01 19:14:19,381
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
> 2012-10-01 19:14:19,381
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/.tmp,
> seqid=132198819, totalSize=496.7m
> 2012-10-01 19:14:27,337
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/.tmp/78040c736c4149a884a1bdcda9916416
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9686d8348fe53644334c0423cc217d26/U/78040c736c4149a884a1bdcda9916416
> 2012-10-01 19:14:27,514
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
> into 78040c736c4149a884a1bdcda9916416, size=487.5m; total size for
> store is 487.5m
> 2012-10-01 19:14:27,514
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.,
> storeName=U, fileCount=3, fileSize=496.7m, priority=4,
> time=10631557966599560; duration=8sec
> 2012-10-01 19:14:27,514
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
> 2012-10-01 19:14:27,514
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/.tmp,
> seqid=132200816, totalSize=521.7m
> 2012-10-01 19:14:36,962
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/.tmp/0142b8bcdda948c185887358990af6d1
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/34b52e7208034f85db8d1e39ca6c1329/U/0142b8bcdda948c185887358990af6d1
> 2012-10-01 19:14:37,171
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
> into 0142b8bcdda948c185887358990af6d1, size=510.7m; total size for
> store is 510.7m
> 2012-10-01 19:14:37,171
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.,
> storeName=U, fileCount=3, fileSize=521.7m, priority=4,
> time=10631557967125617; duration=9sec
> 2012-10-01 19:14:37,172
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
> 2012-10-01 19:14:37,172
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/.tmp,
> seqid=132198832, totalSize=565.5m
> 2012-10-01 19:14:57,082
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/.tmp/44a27dce8df04306908579c22be76786
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/44449344385d98cd7512008dfa532f8e/U/44a27dce8df04306908579c22be76786
> 2012-10-01 19:14:57,429
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
> into 44a27dce8df04306908579c22be76786, size=557.7m; total size for
> store is 557.7m
> 2012-10-01 19:14:57,429
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.,
> storeName=U, fileCount=3, fileSize=565.5m, priority=4,
> time=10631557967207683; duration=20sec
> 2012-10-01 19:14:57,429
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
> 2012-10-01 19:14:57,430
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/.tmp,
> seqid=132199414, totalSize=845.6m
> 2012-10-01 19:16:54,394
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/.tmp/771813ba0c87449ebd99d5e7916244f8
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8adf268d4fcb494344745c14b090e773/U/771813ba0c87449ebd99d5e7916244f8
> 2012-10-01 19:16:54,636
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
> into 771813ba0c87449ebd99d5e7916244f8, size=827.3m; total size for
> store is 827.3m
> 2012-10-01 19:16:54,636
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.,
> storeName=U, fileCount=3, fileSize=845.6m, priority=4,
> time=10631557967560440; duration=1mins, 57sec
> 2012-10-01 19:16:54,636
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
> 2012-10-01 19:16:54,637
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/.tmp,
> seqid=132198824, totalSize=1012.4m
> 2012-10-01 19:17:35,610
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/.tmp/771a4124c763468c8dea927cb53887ee
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/b9f56cf1f6f6c7b0cdf2a07a3d36846b/U/771a4124c763468c8dea927cb53887ee
> 2012-10-01 19:17:35,874
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
> into 771a4124c763468c8dea927cb53887ee, size=974.0m; total size for
> store is 974.0m
> 2012-10-01 19:17:35,875
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.,
> storeName=U, fileCount=3, fileSize=1012.4m, priority=4,
> time=10631557967678796; duration=41sec
> 2012-10-01 19:17:35,875
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
> 2012-10-01 19:17:35,875
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/.tmp,
> seqid=132198815, totalSize=530.5m
> 2012-10-01 19:17:47,481
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/.tmp/24328f8244f747bf8ba81b74ef2893fa
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/506f6865d3167d722fec947a59761822/U/24328f8244f747bf8ba81b74ef2893fa
> 2012-10-01 19:17:47,741
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
> into 24328f8244f747bf8ba81b74ef2893fa, size=524.0m; total size for
> store is 524.0m
> 2012-10-01 19:17:47,741
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.,
> storeName=U, fileCount=3, fileSize=530.5m, priority=4,
> time=10631557967807915; duration=11sec
> 2012-10-01 19:17:47,741
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
> 2012-10-01 19:17:47,741
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/.tmp,
> seqid=132201190, totalSize=529.3m
> 2012-10-01 19:17:58,031
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/.tmp/cae48d1b96eb4440a7bcd5fa3b4c070b
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/bf64051a387fc2970252a1c8919dfd88/U/cae48d1b96eb4440a7bcd5fa3b4c070b
> 2012-10-01 19:17:58,232
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
> into cae48d1b96eb4440a7bcd5fa3b4c070b, size=521.3m; total size for
> store is 521.3m
> 2012-10-01 19:17:58,232
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.,
> storeName=U, fileCount=3, fileSize=529.3m, priority=4,
> time=10631557967959079; duration=10sec
> 2012-10-01 19:17:58,232
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
> 2012-10-01 19:17:58,232
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3
> file(s) in U of
> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/.tmp,
> seqid=132199205, totalSize=475.2m
> 2012-10-01 19:18:06,764
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/.tmp/ba51afdc860048b6b2e1047b06fb3b29
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31a1dcefef4d5e3133b323cdaac918d7/U/ba51afdc860048b6b2e1047b06fb3b29
> 2012-10-01 19:18:07,065
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 3 file(s) in U of
> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
> into ba51afdc860048b6b2e1047b06fb3b29, size=474.5m; total size for
> store is 474.5m
> 2012-10-01 19:18:07,065
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.,
> storeName=U, fileCount=3, fileSize=475.2m, priority=4,
> time=10631557968104570; duration=8sec
> 2012-10-01 19:18:07,065
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
> 2012-10-01 19:18:07,065
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/.tmp,
> seqid=132198822, totalSize=522.5m
> 2012-10-01 19:18:18,306
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/.tmp/7a0bd16b11f34887b2690e9510071bf0
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/fb85da01ca228de9f9ac6ffa63416e9b/U/7a0bd16b11f34887b2690e9510071bf0
> 2012-10-01 19:18:18,439
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
> into 7a0bd16b11f34887b2690e9510071bf0, size=520.0m; total size for
> store is 520.0m
> 2012-10-01 19:18:18,440
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.,
> storeName=U, fileCount=2, fileSize=522.5m, priority=5,
> time=10631557965863914; duration=11sec
> 2012-10-01 19:18:18,440
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
> 2012-10-01 19:18:18,440
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/.tmp,
> seqid=132198823, totalSize=548.0m
> 2012-10-01 19:18:32,288
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/.tmp/dcd050acc2e747738a90aebaae8920e4
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f0198c0a2a34f18da689910235a9b0e2/U/dcd050acc2e747738a90aebaae8920e4
> 2012-10-01 19:18:32,431
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
> into dcd050acc2e747738a90aebaae8920e4, size=528.2m; total size for
> store is 528.2m
> 2012-10-01 19:18:32,431
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.,
> storeName=U, fileCount=2, fileSize=548.0m, priority=5,
> time=10631557966071838; duration=13sec
> 2012-10-01 19:18:32,431
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
> 2012-10-01 19:18:32,431
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/.tmp,
> seqid=132199001, totalSize=475.9m
> 2012-10-01 19:18:43,154
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/.tmp/15a9167cd9754fd4b3674fe732648a03
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/88fa75b4719f4b83b9165474139c4a94/U/15a9167cd9754fd4b3674fe732648a03
> 2012-10-01 19:18:43,322
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
> into 15a9167cd9754fd4b3674fe732648a03, size=475.9m; total size for
> store is 475.9m
> 2012-10-01 19:18:43,322
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.,
> storeName=U, fileCount=2, fileSize=475.9m, priority=5,
> time=10631557966273447; duration=10sec
> 2012-10-01 19:18:43,322
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
> 2012-10-01 19:18:43,322
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/.tmp,
> seqid=132198833, totalSize=824.8m
> 2012-10-01 19:19:00,252
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/.tmp/bf8da91da0824a909f684c3ecd0ee8da
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/8c5f4903b82a4ff64ff1638c95692b60/U/bf8da91da0824a909f684c3ecd0ee8da
> 2012-10-01 19:19:00,788
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
> into bf8da91da0824a909f684c3ecd0ee8da, size=803.0m; total size for
> store is 803.0m
> 2012-10-01 19:19:00,788
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.,
> storeName=U, fileCount=2, fileSize=824.8m, priority=5,
> time=10631557966382580; duration=17sec
> 2012-10-01 19:19:00,788
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
> 2012-10-01 19:19:00,788
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/.tmp,
> seqid=132198810, totalSize=565.3m
> 2012-10-01 19:19:11,311
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/.tmp/5cd2032f48bc4287b8866165dcb6d3e6
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/74a852e5b4186edd51ca714bd77f80c0/U/5cd2032f48bc4287b8866165dcb6d3e6
> 2012-10-01 19:19:11,504
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
> into 5cd2032f48bc4287b8866165dcb6d3e6, size=553.5m; total size for
> store is 553.5m
> 2012-10-01 19:19:11,504
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.,
> storeName=U, fileCount=2, fileSize=565.3m, priority=5,
> time=10631557966480961; duration=10sec
> 2012-10-01 19:19:11,504
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
> 2012-10-01 19:19:11,504
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/.tmp,
> seqid=132198825, totalSize=519.6m
> 2012-10-01 19:19:22,186
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/.tmp/6f29b3b15f1747c196ac9aa79f4835b1
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/2088f23f8fb1dbc67b972f8744aca289/U/6f29b3b15f1747c196ac9aa79f4835b1
> 2012-10-01 19:19:22,437
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
> into 6f29b3b15f1747c196ac9aa79f4835b1, size=512.7m; total size for
> store is 512.7m
> 2012-10-01 19:19:22,437
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.,
> storeName=U, fileCount=2, fileSize=519.6m, priority=5,
> time=10631557966769107; duration=10sec
> 2012-10-01 19:19:22,437
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
> 2012-10-01 19:19:22,437
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/.tmp,
> seqid=132198836, totalSize=528.3m
> 2012-10-01 19:19:34,752
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/.tmp/d836630f7e2b4212848d7e4edc7238f1
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/cd7e7eb88967b3dcb223de9c4ad807a9/U/d836630f7e2b4212848d7e4edc7238f1
> 2012-10-01 19:19:34,945
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
> into d836630f7e2b4212848d7e4edc7238f1, size=504.3m; total size for
> store is 504.3m
> 2012-10-01 19:19:34,945
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.,
> storeName=U, fileCount=2, fileSize=528.3m, priority=5,
> time=10631557967026388; duration=12sec
> 2012-10-01 19:19:34,945
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
> 2012-10-01 19:19:34,945
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/.tmp,
> seqid=132198841, totalSize=813.8m
> 2012-10-01 19:19:49,303
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/.tmp/c70692c971cd4e899957f9d5b189372e
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/f81d0498ab42b400f37a48d4f3854006/U/c70692c971cd4e899957f9d5b189372e
> 2012-10-01 19:19:49,428
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
> into c70692c971cd4e899957f9d5b189372e, size=813.7m; total size for
> store is 813.7m
> 2012-10-01 19:19:49,428
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.,
> storeName=U, fileCount=2, fileSize=813.8m, priority=5,
> time=10631557967436197; duration=14sec
> 2012-10-01 19:19:49,428
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
> 2012-10-01 19:19:49,429
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/.tmp,
> seqid=132198642, totalSize=812.0m
> 2012-10-01 19:20:38,718
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/.tmp/bf99f97891ed42f7847a11cfb8f46438
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/31c8b60bb6ad6840de937a28e3482101/U/bf99f97891ed42f7847a11cfb8f46438
> 2012-10-01 19:20:38,825
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
> into bf99f97891ed42f7847a11cfb8f46438, size=811.3m; total size for
> store is 811.3m
> 2012-10-01 19:20:38,825
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.,
> storeName=U, fileCount=2, fileSize=812.0m, priority=5,
> time=10631557968183922; duration=49sec
> 2012-10-01 19:20:38,826
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
> 2012-10-01 19:20:38,826
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/.tmp,
> seqid=132198138, totalSize=588.7m
> 2012-10-01 19:20:48,274
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/.tmp/9f44b7eeab58407ca98bb4ec90126035
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/c08d16db6188bd8cec100eeb1291d5b9/U/9f44b7eeab58407ca98bb4ec90126035
> 2012-10-01 19:20:48,383
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
> into 9f44b7eeab58407ca98bb4ec90126035, size=573.4m; total size for
> store is 573.4m
> 2012-10-01 19:20:48,383
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.,
> storeName=U, fileCount=2, fileSize=588.7m, priority=5,
> time=10631557968302831; duration=9sec
> 2012-10-01 19:20:48,383
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
> 2012-10-01 19:20:48,383
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/.tmp,
> seqid=132198644, totalSize=870.8m
> 2012-10-01 19:21:04,998
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/.tmp/920844c25b1847c6ac4b880e8cf1d5b0
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/326405233b2b444691860b14ef587f78/U/920844c25b1847c6ac4b880e8cf1d5b0
> 2012-10-01 19:21:05,107
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
> into 920844c25b1847c6ac4b880e8cf1d5b0, size=869.0m; total size for
> store is 869.0m
> 2012-10-01 19:21:05,107
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.,
> storeName=U, fileCount=2, fileSize=870.8m, priority=5,
> time=10631557968521590; duration=16sec
> 2012-10-01 19:21:05,107
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
> 2012-10-01 19:21:05,107
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/.tmp,
> seqid=132198622, totalSize=885.3m
> 2012-10-01 19:21:27,231
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/.tmp/c85d413975d642fc914253bd08f3484f
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/99012cf45da4109e6b570e8b0178852c/U/c85d413975d642fc914253bd08f3484f
> 2012-10-01 19:21:27,791
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
> into c85d413975d642fc914253bd08f3484f, size=848.3m; total size for
> store is 848.3m
> 2012-10-01 19:21:27,791
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.,
> storeName=U, fileCount=2, fileSize=885.3m, priority=5,
> time=10631557968628383; duration=22sec
> 2012-10-01 19:21:27,791
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on U
> in region orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
> 2012-10-01 19:21:27,791
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 2
> file(s) in U of
> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
> into tmpdir=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/.tmp,
> seqid=132198621, totalSize=796.5m
> 2012-10-01 19:21:42,374
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at
> hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/.tmp/ce543c630dd142309af6dca2a9ab5786
> to hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/621ebefbdb194a82d6314ff0f58b67b1/U/ce543c630dd142309af6dca2a9ab5786
> 2012-10-01 19:21:42,515
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.Store: Completed major compaction
> of 2 file(s) in U of
> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
> into ce543c630dd142309af6dca2a9ab5786, size=795.5m; total size for
> store is 795.5m
> 2012-10-01 19:21:42,516
> [regionserver60020-largeCompactions-1348577979539] INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.,
> storeName=U, fileCount=2, fileSize=796.5m, priority=5,
> time=10631557968713853; duration=14sec
> 2012-10-01 19:49:58,159 [ResponseProcessor for block
> blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor
> exception  for block
> blk_5535637699691880681_51616301java.io.EOFException
>    at java.io.DataInputStream.readFully(DataInputStream.java:180)
>    at java.io.DataInputStream.readLong(DataInputStream.java:399)
>    at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$PipelineAck.readFields(DataTransferProtocol.java:120)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2634)
> 
> 2012-10-01 19:49:58,167 [IPC Server handler 87 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
> {"processingtimems":46208,"client":"10.100.102.155:38534","timeRange":[0,9223372036854775807],"starttimems":1349120951956,"responsesize":329939,"class":"HRegionServer","table":"orwell_events","cacheBlocks":true,"families":{"U":["ALL"]},"row":"00322994","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
> 2012-10-01 19:49:58,160
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have
> not heard from server in 56633ms for sessionid 0x137ec64368509f7,
> closing socket connection and attempting reconnect
> 2012-10-01 19:49:58,160 [regionserver60020] WARN
> org.apache.hadoop.hbase.util.Sleeper: We slept 49116ms instead of
> 3000ms, this is likely due to a long garbage collecting pause and it's
> usually bad, see
> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> 2012-10-01 19:49:58,160
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have
> not heard from server in 53359ms for sessionid 0x137ec64368509f6,
> closing socket connection and attempting reconnect
> 2012-10-01 19:49:58,320 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] INFO
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 waiting for responder to exit.
> 2012-10-01 19:49:58,380 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
> 2012-10-01 19:49:58,380 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
> 10.100.101.156:50010
> 2012-10-01 19:49:59,113 [regionserver60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: Unhandled
> exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT
> rejected; currently processing
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
> org.apache.hadoop.hbase.YouAreDeadException:
> org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected;
> currently processing
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
>    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:797)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:688)
>    at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hbase.YouAreDeadException: Server REPORT rejected;
> currently processing
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
>    at org.apache.hadoop.hbase.master.ServerManager.checkIsDead(ServerManager.java:222)
>    at org.apache.hadoop.hbase.master.ServerManager.regionServerReport(ServerManager.java:148)
>    at org.apache.hadoop.hbase.master.HMaster.regionServerReport(HMaster.java:844)
>    at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:918)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>    at $Proxy8.regionServerReport(Unknown Source)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:794)
>    ... 2 more
> 2012-10-01 19:49:59,114 [regionserver60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:49:59,397 [IPC Server handler 36 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: (operationTooSlow):
> {"processingtimems":47521,"client":"10.100.102.176:60221","timeRange":[0,9223372036854775807],"starttimems":1349120951875,"responsesize":699312,"class":"HRegionServer","table":"orwell_events","cacheBlocks":true,"families":{"U":["ALL"]},"row":"00318223","queuetimems":0,"method":"get","totalColumns":1,"maxVersions":1}
> 2012-10-01 19:50:00,355 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
> primary datanode 10.100.102.122:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy4.nextGenerationStamp(Unknown Source)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy14.recoverBlock(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:00,355
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to
> server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181
> 2012-10-01 19:50:00,356
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section
> 'Client' could not be found. If you are not using SASL, you may ignore
> this. On the other hand, if you expected SASL to work, please fix your
> JAAS configuration.
> 2012-10-01 19:50:00,356 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.122:50010 failed 1 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
> retry...
> 2012-10-01 19:50:00,357
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
> session
> 2012-10-01 19:50:00,358
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
> server; r-o mode will be unavailable
> 2012-10-01 19:50:00,359 [regionserver60020-EventThread] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272:
> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6
> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6 received
> expired from ZooKeeper, aborting
> org.apache.zookeeper.KeeperException$SessionExpiredException:
> KeeperErrorCode = Session expired
>    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.connectionEvent(ZooKeeperWatcher.java:374)
>    at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:271)
>    at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:521)
>    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:497)
> 2012-10-01 19:50:00,359
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper
> service, session 0x137ec64368509f6 has expired, closing socket
> connection
> 2012-10-01 19:50:00,359 [regionserver60020-EventThread] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:00,367 [regionserver60020-EventThread] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
> numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1201,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> 2012-10-01 19:50:00,367 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
> numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1201,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> 2012-10-01 19:50:00,381
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to
> server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181
> 2012-10-01 19:50:00,401 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Unhandled
> exception: org.apache.hadoop.hbase.YouAreDeadException: Server REPORT
> rejected; currently processing
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272 as dead server
> 2012-10-01 19:50:00,403
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section
> 'Client' could not be found. If you are not using SASL, you may ignore
> this. On the other hand, if you expected SASL to work, please fix your
> JAAS configuration.
> 2012-10-01 19:50:00,412 [regionserver60020-EventThread] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED:
> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6
> regionserver:60020-0x137ec64368509f6-0x137ec64368509f6 received
> expired from ZooKeeper, aborting
> 2012-10-01 19:50:00,412 [regionserver60020-EventThread] INFO
> org.apache.zookeeper.ClientCnxn: EventThread shut down
> 2012-10-01 19:50:00,412 [regionserver60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Stopping server on 60020
> 2012-10-01 19:50:00,413
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
> session
> 2012-10-01 19:50:00,413 [IPC Server handler 9 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server handler 20 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 20 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server handler 2 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server handler 10 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 10 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server listener on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on
> 60020
> 2012-10-01 19:50:00,413 [IPC Server handler 12 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 12 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 21 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 21 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 13 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 13 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 19 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 19 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 22 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 22 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 11 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 11 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: Sending interrupt
> to stop the worker thread
> 2012-10-01 19:50:00,414 [IPC Server handler 6 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Stopping
> infoServer
> 2012-10-01 19:50:00,414 [IPC Server handler 0 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 28 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 28 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 7 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server handler 15 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 15 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 5 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 48 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 48 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server handler 14 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 14 on 60020:
> exiting
> 2012-10-01 19:50:00,413 [IPC Server handler 18 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 18 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 37 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 37 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 47 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 47 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 50 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 50 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 45 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 45 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 36 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 36 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 43 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 43 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 42 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 42 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 38 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 38 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 8 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 40 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 40 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 34 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 34 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 4 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60020:
> exiting
> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@5fa9b60a,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320394"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.117:56438: output error
> 2012-10-01 19:50:00,414 [IPC Server handler 61 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59104
> remote=/10.100.101.156:50010]. 59988 millis timeout left.
>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.fill(FSInputChecker.java:176)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:193)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1243)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020
> caught: java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:00,415 [IPC Server handler 44 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 44 on 60020:
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 31 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 31 on 60020:
> exiting
> 2012-10-01 19:50:00,414
> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272]
> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
> SplitLogWorker interrupted while waiting for task, exiting:
> java.lang.InterruptedException
> 2012-10-01 19:50:00,563
> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272]
> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
> SplitLogWorker data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
> exiting
> 2012-10-01 19:50:00,414 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 3201413024070455305:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59115
> remote=/10.100.101.156:50010]. 59999 millis timeout left.
>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>    at java.io.DataInputStream.readShort(DataInputStream.java:295)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,563 [IPC Server Responder] INFO
> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
> 2012-10-01 19:50:00,414 [IPC Server handler 27 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 27 on 60020:
> exiting
> 2012-10-01 19:50:00,414
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
> server; r-o mode will be unavailable
> 2012-10-01 19:50:00,414 [IPC Server handler 55 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block -2144655386884254555:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59108
> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>    at java.io.DataInputStream.readInt(DataInputStream.java:370)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1350)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,649
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Unable to reconnect to ZooKeeper
> service, session 0x137ec64368509f7 has expired, closing socket
> connection
> 2012-10-01 19:50:00,414 [IPC Server handler 39 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.173:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
> for block -2100467641393578191:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:48825
> remote=/10.100.102.173:50010]. 60000 millis timeout left.
>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,414 [IPC Server handler 26 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -5183799322211896791:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59078
> remote=/10.100.101.156:50010]. 59949 millis timeout left.
>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,414 [IPC Server handler 85 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -5183799322211896791:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59082
> remote=/10.100.101.156:50010]. 59950 millis timeout left.
>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,414 [IPC Server handler 57 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -1763662403960466408:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59116
> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>    at java.io.DataInputStream.readShort(DataInputStream.java:295)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,649 [IPC Server handler 79 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,649 [regionserver60020-EventThread] INFO
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> This client just lost it's session with ZooKeeper, trying to
> reconnect.
> 2012-10-01 19:50:00,649 [IPC Server handler 89 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,649 [PRI IPC Server handler 3 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 3 on 60020:
> exiting
> 2012-10-01 19:50:00,649 [PRI IPC Server handler 0 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 0 on 60020:
> exiting
> 2012-10-01 19:50:00,700 [IPC Server handler 56 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 56 on 60020:
> exiting
> 2012-10-01 19:50:00,649 [PRI IPC Server handler 2 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 2 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [IPC Server handler 54 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 54 on 60020:
> exiting
> 2012-10-01 19:50:00,563 [IPC Server Responder] INFO
> org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder
> 2012-10-01 19:50:00,701 [IPC Server handler 71 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 71 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [IPC Server handler 79 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.193:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,563 [IPC Server handler 16 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,701 [PRI IPC Server handler 9 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 9 on 60020:
> exiting
> 2012-10-01 19:50:00,563 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,415 [IPC Server handler 60 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@7eee7b96,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321525"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.125:49043: output error
> 2012-10-01 19:50:00,704 [IPC Server handler 3 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 6550563574061266649:java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,717 [IPC Server handler 49 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 49 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 94 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 94 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [IPC Server handler 83 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 83 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [PRI IPC Server handler 1 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 1 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [PRI IPC Server handler 7 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 7 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [IPC Server handler 82 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 82 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [PRI IPC Server handler 6 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 6 on 60020:
> exiting
> 2012-10-01 19:50:00,719 [IPC Server handler 16 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.107:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,701 [IPC Server handler 74 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 74 on 60020:
> exiting
> 2012-10-01 19:50:00,719 [IPC Server handler 86 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 86 on 60020:
> exiting
> 2012-10-01 19:50:00,721 [IPC Server handler 60 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020
> caught: java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:00,721 [IPC Server handler 60 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 60 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [PRI IPC Server handler 5 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 5 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [regionserver60020] INFO org.mortbay.log:
> Stopped SelectChannelConnector@0.0.0.0:60030
> 2012-10-01 19:50:00,722 [IPC Server handler 35 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 35 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 16 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.133:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 4946845190538507957:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
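[Editor's note on the traces above: the recurring `ClosedByInterruptException` is the expected behavior of Java's interruptible NIO channels, not a network fault. When the region server shuts down after the ZooKeeper session expires, it interrupts its IPC handler threads; any thread blocked in a channel read or connect at that moment has its channel closed and this exception raised. A minimal, self-contained sketch of that mechanism (class name, loopback addresses, and timings are illustrative, not from HBase):]

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class InterruptDemo {
    public static void main(String[] args) throws Exception {
        // A local socket pair; the accepted side never writes,
        // so the client's read() blocks, like the DFSClient reads above.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        SocketChannel peer = server.accept();

        Thread handler = new Thread(() -> {
            try {
                // Blocks until interrupted, like a handler mid-read at shutdown.
                client.read(ByteBuffer.allocate(16));
            } catch (ClosedByInterruptException e) {
                System.out.println("caught ClosedByInterruptException");
            } catch (IOException e) {
                System.out.println("caught " + e.getClass().getSimpleName());
            }
        });
        handler.start();
        Thread.sleep(200);      // let the read block
        handler.interrupt();    // simulate shutdown interrupting the handler
        handler.join();
        peer.close();
        server.close();
    }
}
```

[So the many interrupted-read warnings here are secondary noise from the shutdown; the root cause remains the GC pause that expired the ZooKeeper session.]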
> 2012-10-01 19:50:00,722 [IPC Server handler 98 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 98 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [IPC Server handler 68 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 68 on 60020:
> exiting
> 2012-10-01 19:50:00,701 [IPC Server handler 64 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 64 on 60020:
> exiting
> 2012-10-01 19:50:00,673 [IPC Server handler 33 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 33 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 76 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 76 on 60020:
> exiting
> 2012-10-01 19:50:00,673 [regionserver60020-EventThread] INFO
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Trying to reconnect to zookeeper
> 2012-10-01 19:50:00,736 [IPC Server handler 84 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 84 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 95 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 95 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 75 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 75 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 92 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 92 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 88 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 88 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 67 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 67 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 30 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 30 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 80 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 80 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 62 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 62 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 52 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 52 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 32 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 32 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 97 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 97 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 96 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 96 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 93 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 93 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 73 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 73 on 60020:
> exiting
> 2012-10-01 19:50:00,722 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.47:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,722 [IPC Server handler 87 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 87 on 60020:
> exiting
> 2012-10-01 19:50:00,721 [IPC Server handler 81 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 81 on 60020:
> exiting
> 2012-10-01 19:50:00,721 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,721 [IPC Server handler 59 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block -9081461281107361903:java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,719 [IPC Server handler 65 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 65 on 60020:
> exiting
> 2012-10-01 19:50:00,721 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedChannelException
>    at java.nio.channels.spi.AbstractSelectableChannel.configureBlocking(AbstractSelectableChannel.java:252)
>    at org.apache.hadoop.net.SocketIOWithTimeout.<init>(SocketIOWithTimeout.java:66)
>    at org.apache.hadoop.net.SocketInputStream$Reader.<init>(SocketInputStream.java:50)
>    at org.apache.hadoop.net.SocketInputStream.<init>(SocketInputStream.java:73)
>    at org.apache.hadoop.net.SocketInputStream.<init>(SocketInputStream.java:91)
>    at org.apache.hadoop.net.NetUtils.getInputStream(NetUtils.java:323)
>    at org.apache.hadoop.net.NetUtils.getInputStream(NetUtils.java:299)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1474)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,721 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59074
> remote=/10.100.101.156:50010]. 59947 millis timeout left.
>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,811 [IPC Server handler 59 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.135:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block -9081461281107361903:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,719 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59107
> remote=/10.100.101.156:50010]. 60000 millis timeout left.
>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,831 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.153:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,719 [IPC Server handler 39 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.144:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
> for block -2100467641393578191:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,719 [IPC Server handler 26 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.138:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,852 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.174:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,719 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block 5946486101046455013:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59091
> remote=/10.100.101.156:50010]. 59953 millis timeout left.
>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,719 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.148:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,719 [IPC Server handler 53 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 53 on 60020:
> exiting
> 2012-10-01 19:50:00,719 [IPC Server handler 79 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.154:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 2209451090614340242:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,719 [IPC Server handler 89 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.47:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,719 [IPC Server handler 46 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 4946845190538507957:java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[connected local=/10.100.101.156:59113
> remote=/10.100.101.156:50010]. 59999 millis timeout left.
>    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>    at java.io.DataInputStream.readShort(DataInputStream.java:295)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1478)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2041)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,895 [IPC Server handler 26 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.139:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,701 [IPC Server handler 91 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 91 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 3 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.114:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 6550563574061266649:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,899 [IPC Server handler 89 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.134:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,717 [PRI IPC Server handler 4 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 4 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 77 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 77 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [PRI IPC Server handler 8 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: PRI IPC Server handler 8 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 99 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 99 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 85 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.138:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -5183799322211896791:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,717 [IPC Server handler 51 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 51 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 57 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.138:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,717 [IPC Server handler 55 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.180:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block -2144655386884254555:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,717 [IPC Server handler 70 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 70 on 60020:
> exiting
> 2012-10-01 19:50:00,717 [IPC Server handler 61 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.174:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,904 [IPC Server handler 55 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.173:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block -2144655386884254555:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,705 [IPC Server handler 23 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,705 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,704 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:270)
>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
>    at java.io.DataInputStream.read(DataInputStream.java:132)
>    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readChunk(DFSClient.java:1389)
>    at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
>    at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
>    at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1249)
>    at org.apache.hadoop.fs.FSInputChecker.readFully(FSInputChecker.java:384)
>    at org.apache.hadoop.hdfs.DFSClient$BlockReader.readAll(DFSClient.java:1522)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2047)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,904 [IPC Server handler 61 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.97:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,904 [IPC Server handler 23 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.144:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,904 [regionserver60020-EventThread] INFO
> org.apache.zookeeper.ZooKeeper: Initiating client connection,
> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
> sessionTimeout=180000 watcher=hconnection
> 2012-10-01 19:50:00,905 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.72:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,904 [IPC Server handler 55 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-2144655386884254555_51616216 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,904 [IPC Server handler 57 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.144:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block -1763662403960466408:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,901 [IPC Server handler 85 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-5183799322211896791_51616591 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,899 [IPC Server handler 89 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_5937357897784147544_51616546 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,899 [IPC Server handler 3 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_6550563574061266649_51616152 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,896 [IPC Server handler 46 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_4946845190538507957_51616628 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,896 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.133:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,896 [IPC Server handler 26 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-5183799322211896791_51616591 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,896 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.175:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,895 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.97:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,894 [IPC Server handler 39 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.151:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/f6f6314e944144b7a752222f83f33ede
> for block -2100467641393578191:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,894 [IPC Server handler 79 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_2209451090614340242_51616188 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,857 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.101:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,856 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.134:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,839 [IPC Server handler 59 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.194:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block -9081461281107361903:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,811 [IPC Server handler 16 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_4946845190538507957_51616628 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,787 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.134:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,780 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.134:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,736 [IPC Server handler 63 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 63 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 72 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 72 on 60020:
> exiting
> 2012-10-01 19:50:00,736 [IPC Server handler 78 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 78 on 60020:
> exiting
> 2012-10-01 19:50:00,906 [IPC Server handler 59 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-9081461281107361903_51616031 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,906 [IPC Server handler 39 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-2100467641393578191_51531005 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,906 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.145:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,905 [IPC Server handler 57 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-1763662403960466408_51616605 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,905 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.162:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,904 [IPC Server handler 23 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_-1763662403960466408_51616605 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,904 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.72:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:00,904 [IPC Server handler 61 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_1768076108943205533_51616106 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:00,941 [regionserver60020-SendThread()] INFO
> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
> /10.100.102.197:2181
> 2012-10-01 19:50:00,941 [regionserver60020-EventThread] INFO
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
> of this process is 20776@data3024.ngpipes.milp.ngmoco.com
> 2012-10-01 19:50:00,942
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section
> 'Client' could not be found. If you are not using SASL, you may ignore
> this. On the other hand, if you expected SASL to work, please fix your
> JAAS configuration.
> 2012-10-01 19:50:00,943
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
> session
> 2012-10-01 19:50:00,962
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
> server; r-o mode will be unavailable
> 2012-10-01 19:50:00,962
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
> sessionid = 0x137ec64373dd4b3, negotiated timeout = 40000
> 2012-10-01 19:50:00,971 [regionserver60020-EventThread] INFO
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Reconnected successfully. This disconnect could have been caused by a
> network partition or a long-running GC pause, either way it's
> recommended that you verify your environment.
> 2012-10-01 19:50:00,971 [regionserver60020-EventThread] INFO
> org.apache.zookeeper.ClientCnxn: EventThread shut down
> 2012-10-01 19:50:01,018 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,018 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,019 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.133:50010 for file
> /hbase/orwell_events/6463149a16179d4e44c19bb49e4b4a81/U/021d9dde273e4e60ac3f8a1411a206be
> for block 5946486101046455013:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:185)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:111)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:83)
>    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1721)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:2865)
>    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:1434)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1426)
>    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1402)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3692)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,019 [IPC Server handler 41 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_5946486101046455013_51616031 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:01,020 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.162:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 2851854722247682142:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,021 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,023 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.47:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,023 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.47:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,024 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.174:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,024 [IPC Server handler 61 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@20c6e4bc,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321393"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.118:57165: output error
> 2012-10-01 19:50:01,038 [IPC Server handler 61 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:01,038 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.134:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 5937357897784147544:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,038 [IPC Server handler 61 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 61 on 60020:
> exiting
> 2012-10-01 19:50:01,038 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.148:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,039 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.97:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 1768076108943205533:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,039 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.153:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,039 [IPC Server handler 66 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_1768076108943205533_51616106 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:01,039 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.102.101:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,041 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.156:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,042 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.153:50010 for file
> /hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
> for block 8387547514055202675:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,044 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed to connect to
> /10.100.101.175:50010 for file
> /hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
> for block 3201413024070455305:java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2035)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 
> 2012-10-01 19:50:01,090 [IPC Server handler 29 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00321084/U:BAHAMUTIOS_1/1348883706322/Put,
> lastKey=00324324/U:user/1348900694793/Put, avgKeyLen=31,
> avgValueLen=125185, entries=6053, length=758129544,
> cur=00321312/U:KINGDOMSQUESTSIPAD_2/1349024761759/Put/vlen=460950]
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_8387547514055202675_51616042
> file=/hbase/orwell_events/a9906c96a91bb8d7e62a7a528bf0ea5c/U/d2ee47650b224189b0c27d1c20929c03
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    ... 17 more
> 2012-10-01 19:50:01,091 [IPC Server handler 24 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=00318964/U:user/1349118541276/Put/vlen=311046]
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_2851854722247682142_51616579
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    ... 14 more
> 2012-10-01 19:50:01,091 [IPC Server handler 1 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=0032027/U:KINGDOMSQUESTS_10/1349118531396/Put/vlen=401149]
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_3201413024070455305_51616611
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    ... 14 more
> 2012-10-01 19:50:01,091 [IPC Server handler 25 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=00319173/U:TINYTOWERANDROID_3/1349024232716/Put/vlen=129419]
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_2851854722247682142_51616579
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    ... 14 more
> 2012-10-01 19:50:01,091 [IPC Server handler 90 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=00316914/U:PETCAT_2/1349118542022/Put/vlen=499140]
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_5937357897784147544_51616546
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    ... 14 more
> 2012-10-01 19:50:01,091 [IPC Server handler 17 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=00317054/U:BAHAMUTIOS_4/1348869430278/Put/vlen=104012]
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_5937357897784147544_51616546
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:200)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    ... 17 more
> 2012-10-01 19:50:01,091 [IPC Server handler 58 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not iterate StoreFileScanner[HFileScanner
> for reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=00316983/U:TINYTOWERANDROID_1/1349118439250/Put/vlen=417924]
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:104)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:106)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:289)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_5937357897784147544_51616546
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readNextDataBlock(HFileReaderV2.java:452)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:416)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:99)
>    ... 14 more
> 2012-10-01 19:50:01,091 [IPC Server handler 89 on 60020] ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.io.IOException: Could not seek StoreFileScanner[HFileScanner for
> reader reader=hdfs://namenode301.ngpipes.milp.ngmoco.com:9000/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01,
> compression=none, cacheConf=CacheConfig:enabled [cacheDataOnRead=true]
> [cacheDataOnWrite=false] [cacheIndexesOnWrite=false]
> [cacheBloomsOnWrite=false] [cacheEvictOnClose=false]
> [cacheCompressed=false],
> firstKey=00316914/U:PETCAT_1/1349118541277/Put,
> lastKey=00321083/U:user/1349024170056/Put, avgKeyLen=31,
> avgValueLen=89140, entries=7365, length=656954017,
> cur=00317043/U:BAHAMUTANDROID_7/1348968079952/Put/vlen=419212]
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:131)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap$SeekType$1.seek(KeyValueHeap.java:57)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:277)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:248)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:436)
>    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:322)
>    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:138)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:2945)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2901)
>    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:2918)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3693)
>    at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3585)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1785)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Could not obtain block:
> blk_5937357897784147544_51616546
> file=/hbase/orwell_events/9740f22a42e9e8b6aca3966c0173e680/U/2e5534fea8b24eaf9cc1e05dea788c01
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1993)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2028)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2116)
>    at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1034)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderV2.readBlockData(HFileBlock.java:1318)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:266)
>    at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.seekToDataBlock(HFileBlockIndex.java:209)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.seekTo(HFileReaderV2.java:519)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.reseekTo(HFileReaderV2.java:557)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:194)
>    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:124)
>    ... 17 more
> 2012-10-01 19:50:01,094 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 2012-10-01 19:50:01,094 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>    [stack trace identical to the first occurrence above; elided]
> 2012-10-01 19:50:01,093 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>    [stack trace identical to the first occurrence above; elided]
> 2012-10-01 19:50:01,093 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>    [stack trace identical to the first occurrence above; elided]
> 2012-10-01 19:50:01,092 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>    [stack trace identical to the first occurrence above; elided]
> 2012-10-01 19:50:01,092 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>    [stack trace identical to the first occurrence above; elided]
> 2012-10-01 19:50:01,091 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.ipc.Client: interrupted waiting to send params to
> server
> java.lang.InterruptedException
>    [stack trace identical to the first occurrence above; elided]
> 2012-10-01 19:50:01,095 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
> 2012-10-01 19:50:01,097 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
> 10.100.101.156:50010
> 2012-10-01 19:50:01,115 [IPC Server handler 39 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@2743ecf8,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00390925"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.122:51758: output error
> 2012-10-01 19:50:01,139 [IPC Server handler 39 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:01,139 [IPC Server handler 39 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 39 on 60020:
> exiting
> 2012-10-01 19:50:01,151 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
> primary datanode 10.100.102.122:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy4.nextGenerationStamp(Unknown Source)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy14.recoverBlock(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:01,151 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.122:50010 failed 2 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
> retry...
> 2012-10-01 19:50:01,153 [IPC Server handler 89 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@7137feec,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00317043"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.68:55302: output error
> 2012-10-01 19:50:01,154 [IPC Server handler 89 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:01,154 [IPC Server handler 89 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 89 on 60020:
> exiting
> 2012-10-01 19:50:01,156 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@6b9a9eba,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321504"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.176:32793: output error
> 2012-10-01 19:50:01,157 [IPC Server handler 66 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020
> caught: java.nio.channels.ClosedChannelException
>    [stack trace identical to handler 39 above; elided]
> 
> 2012-10-01 19:50:01,158 [IPC Server handler 66 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 66 on 60020:
> exiting
> 2012-10-01 19:50:01,159 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@586761c,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00391525"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.155:39850: output error
> 2012-10-01 19:50:01,160 [IPC Server handler 41 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020
> caught: java.nio.channels.ClosedChannelException
>    [stack trace identical to handler 39 above; elided]
> 
> 2012-10-01 19:50:01,160 [IPC Server handler 41 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 41 on 60020:
> exiting
> 2012-10-01 19:50:01,216 [regionserver60020.compactionChecker] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer$CompactionChecker:
> regionserver60020.compactionChecker exiting
> 2012-10-01 19:50:01,216 [regionserver60020.logRoller] INFO
> org.apache.hadoop.hbase.regionserver.LogRoller: LogRoller exiting.
> 2012-10-01 19:50:01,216 [regionserver60020.cacheFlusher] INFO
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
> regionserver60020.cacheFlusher exiting
> 2012-10-01 19:50:01,217 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: aborting server
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
> 2012-10-01 19:50:01,218 [regionserver60020] INFO
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Closed zookeeper sessionid=0x137ec64373dd4b3
> 2012-10-01 19:50:01,270
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,24294294,1349027918385.068e6f4f7b8a81fb21e49fe3ac47f262.
> 2012-10-01 19:50:01,271
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96510144,1348960969795.fe2a133a17d09a65a6b0d4fb60e6e051.
> 2012-10-01 19:50:01,272
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56499174,1349027424070.7f767ca333bef3dcdacc9a6c673a8350.
> 2012-10-01 19:50:01,273
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96515494,1348960969795.8ab4e1d9f4e4c388f3f4f18eec637e8a.
> 2012-10-01 19:50:01,273
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,98395724,1348969940123.08188cc246bf752c17cfe57f99970924.
> 2012-10-01 19:50:01,274
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56469014,1348940247961.6ace1f2f0b7ad3e454f738d66255047f.
> 2012-10-01 19:50:01,275
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56604984,1348940650040.14639a082062e98abfea8ae3fff5d2c7.
> 2012-10-01 19:50:01,275
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56880144,1348969971950.ece85a086a310aacc2da259a3303e67e.
> 2012-10-01 19:50:01,276
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56447305,1349027937173.fb85da01ca228de9f9ac6ffa63416e9b.
> 2012-10-01 19:50:01,277
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,31267284,1348961229728.fc429276c44f5c274f00168f12128bad.
> 2012-10-01 19:50:01,278
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56569824,1348940809479.9808dac5b895fc9b8f9892c4b72b3804.
> 2012-10-01 19:50:01,279
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56425354,1349031095620.e4965f2e57729ff9537986da3e19258c.
> 2012-10-01 19:50:01,280
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96504305,1348964001164.77f75cf8ba76ebc4417d49f019317d0a.
> 2012-10-01 19:50:01,280
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,60743825,1348962513777.f377f704db5f0d000e36003338e017b1.
> 2012-10-01 19:50:01,283
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,09603014,1349026790546.d634bfe659bdf2f45ec89e53d2d38791.
> 2012-10-01 19:50:01,283
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,31274021,1348961229728.e93382b458a84c22f2e5aeb9efa737b5.
> 2012-10-01 19:50:01,285
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56462454,1348982699951.a2dafbd054bf65aa6f558dc9a2d839a1.
> 2012-10-01 19:50:01,286
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> Orwell,48814673,1348270987327.29818ea19d62126d5616a7ba7d7dae21.
> 2012-10-01 19:50:01,288
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56610954,1348940650040.3609c1bfc2be6936577b6be493e7e8d9.
> 2012-10-01 19:50:01,289
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56511684,1349027965795.f0198c0a2a34f18da689910235a9b0e2.
> 2012-10-01 19:50:01,289
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,05205763,1348941089603.957ea0e428ba6ff21174ecdda96f9fdc.
> 2012-10-01 19:50:01,289
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56349615,1348941138879.dfabbd25c59fd6c34a58d9eacf4c096f.
> 2012-10-01 19:50:01,292
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56503505,1349027424070.129160a78f13c17cc9ea16ff3757cda9.
> 2012-10-01 19:50:01,292
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,91248264,1348942310344.a93982b8f91f260814885bc0afb4fbb9.
> 2012-10-01 19:50:01,293
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,98646724,1348980566403.a4f2a16d1278ad1246068646c4886502.
> 2012-10-01 19:50:01,293
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56454594,1348982903997.7107c6a1b2117fb59f68210ce82f2cc9.
> 2012-10-01 19:50:01,294
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56564144,1348940809479.636092bb3ec2615b115257080427d091.
> 2012-10-01 19:50:01,295
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_user_events,06252594,1348582793143.499f0a0f4704afa873c83f141f5e0324.
> 2012-10-01 19:50:01,296
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56617164,1348941287729.3992a80a6648ab62753b4998331dcfdf.
> 2012-10-01 19:50:01,296
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,98390944,1348969940123.af160e450632411818fa8d01b2c2ed0b.
> 2012-10-01 19:50:01,297
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56703743,1348941223663.5cc2fcb82080dbf14956466c31f1d27c.
> 2012-10-01 19:50:01,297
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56427793,1349031095620.88fa75b4719f4b83b9165474139c4a94.
> 2012-10-01 19:50:01,298
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56693584,1348942631318.f01b179c1fad1f18b97b37fc8f730898.
> 2012-10-01 19:50:01,299
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_user_events,12140615,1348582250428.7822f7f5ceea852b04b586fdf34debff.
> 2012-10-01 19:50:01,300
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56388374,1348941251489.8c5f4903b82a4ff64ff1638c95692b60.
> 2012-10-01 19:50:01,300
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96420705,1348942597601.a063e06eb840ee49bb88474ee8e22160.
> 2012-10-01 19:50:01,300
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56414764,1349027912141.74a852e5b4186edd51ca714bd77f80c0.
> 2012-10-01 19:50:01,300
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96432674,1348961425148.1a793cf2137b9599193a1e2d5d9749c5.
> 2012-10-01 19:50:01,302
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56329744,1349028887914.9686d8348fe53644334c0423cc217d26.
> 2012-10-01 19:50:01,303
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,44371574,1348961840615.00f5b4710a43f2ee75d324bebb054323.
> 2012-10-01 19:50:01,304
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,562fc921,1348941189517.cff261c585416844113f232960c8d6b4.
> 2012-10-01 19:50:01,304
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56323831,1348941216581.0b0f3bdb03ce9e4f58156a4143018e0e.
> 2012-10-01 19:50:01,305
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56480194,1349028080664.03a7046ffcec7e1f19cdb2f9890a353e.
> 2012-10-01 19:50:01,306
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56418294,1348940288044.c872be05981c047e8c1ee4765b92a74d.
> 2012-10-01 19:50:01,306
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,53590305,1348940776419.4c98d7846622f2d8dad4e998dae81d2b.
> 2012-10-01 19:50:01,307
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96445963,1348942353563.66a0f602720191bf21a1dfd12eec4a35.
> 2012-10-01 19:50:01,307
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,5649502,1349027928183.2088f23f8fb1dbc67b972f8744aca289.
> 2012-10-01 19:50:01,307
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56305294,1348941189517.20f67941294c259e2273d3e0b7ae5198.
> 2012-10-01 19:50:01,308
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56516115,1348981132325.0f753cb87c1163d95d9d10077d6308db.
> 2012-10-01 19:50:01,309
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56796924,1348941269761.843e0aee0b15d67b810c7b3fe5a2dda7.
> 2012-10-01 19:50:01,309
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56440004,1348941150045.7033cb81a66e405d7bf45cd55ab010e3.
> 2012-10-01 19:50:01,309
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56317864,1348941124299.0de45283aa626fc83b2c026e1dd8bfec.
> 2012-10-01 19:50:01,310
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56809673,1348941834500.08244d4ed5f7fdf6d9ac9c73fbfd3947.
> 2012-10-01 19:50:01,310
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56894864,1348970959541.fc19a6ffe18f29203369d32ad1b102ce.
> 2012-10-01 19:50:01,311
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56382491,1348940876960.2392137bf0f4cb695c08c0fb22ce5294.
> 2012-10-01 19:50:01,312
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,95128264,1349026585563.5dc569af8afe0a84006b80612c15007f.
> 2012-10-01 19:50:01,312
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,5631146,1348941124299.b7c10be9855b5e8ba3a76852920627f9.
> 2012-10-01 19:50:01,312
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56710424,1348940462668.a370c149c232ebf4427e070eb28079bc.
> 2012-10-01 19:50:01,314 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Session: 0x137ec64373dd4b3 closed
> 2012-10-01 19:50:01,314 [regionserver60020-EventThread] INFO
> org.apache.zookeeper.ClientCnxn: EventThread shut down
> 2012-10-01 19:50:01,314 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 78
> regions to close
> 2012-10-01 19:50:01,317
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96497834,1348964001164.0b12f37b74b2124ef9f27d1ef0ebb17a.
> 2012-10-01 19:50:01,318
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56507574,1349027965795.79113c51d318a11286b39397ebbfdf04.
> 2012-10-01 19:50:01,319
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,24297525,1349027918385.047533f3d801709a26c895a01dcc1a73.
> 2012-10-01 19:50:01,320
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96439694,1348961425148.038e0e43a6e56760e4daae6f34bfc607.
> 2012-10-01 19:50:01,320
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,82811715,1348904784424.88fae4279f9806bef745d90f7ad37241.
> 2012-10-01 19:50:01,321
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56699434,1348941223663.ef3ccf0af60ee87450806b393f89cb6e.
> 2012-10-01 19:50:01,321
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56487344,1349027988535.cd7e7eb88967b3dcb223de9c4ad807a9.
> 2012-10-01 19:50:01,322
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56601774,1349029123008.34b52e7208034f85db8d1e39ca6c1329.
> 2012-10-01 19:50:01,322
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56586234,1349028973106.44449344385d98cd7512008dfa532f8e.
> 2012-10-01 19:50:01,323
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56465563,1348982699951.f34a29c0c4fc32e753d12db996ccc995.
> 2012-10-01 19:50:01,324
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56450734,1349027937173.c70110b3573a48299853117c4287c7be.
> 2012-10-01 19:50:01,325
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56361984,1349029457686.6c8d6974741e59df971da91c7355de1c.
> 2012-10-01 19:50:01,327
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56814705,1348962077056.69fd74167a3c5c2961e45d339b962ca9.
> 2012-10-01 19:50:01,327
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,00389105,1348978080963.6463149a16179d4e44c19bb49e4b4a81.
> 2012-10-01 19:50:01,329
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56558944,1348940893836.03bd1c0532949ec115ca8d5215dbb22f.
> 2012-10-01 19:50:01,330 [IPC Server handler 59 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@112ba2bf,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00392783"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.135:34935: output error
> 2012-10-01 19:50:01,330
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,5658955,1349027142822.e65d0c1f452cb41d47ad08560c653607.
> 2012-10-01 19:50:01,331 [IPC Server handler 59 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:01,331
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56402364,1349049689267.27b452f3bcce0815b7bf92370cbb51de.
> 2012-10-01 19:50:01,331 [IPC Server handler 59 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 59 on 60020:
> exiting
> 2012-10-01 19:50:01,332
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96426544,1348942597601.addf704f99dd1b2e07b3eff505e2c811.
> 2012-10-01 19:50:01,333
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,60414161,1348962852909.c6b1b21f00bbeef8648c4b9b3d28b49a.
> 2012-10-01 19:50:01,333
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56552794,1348940893836.5314886f88f6576e127757faa25cef7c.
> 2012-10-01 19:50:01,335
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56910924,1348962040261.fdedae86206fc091a72dde52a3d0d0b4.
> 2012-10-01 19:50:01,335
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56720084,1349029064698.ee5cb00ab358be0d2d36c59189da32f8.
> 2012-10-01 19:50:01,336
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56624533,1348941287729.6121fce2c31d4754b4ad4e855d85b501.
> 2012-10-01 19:50:01,336
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56899934,1348970959541.f34f01dd65e293cb6ab13de17ac91eec.
> 2012-10-01 19:50:01,337
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56394773,1348941251489.f81d0498ab42b400f37a48d4f3854006.
> 2012-10-01 19:50:01,337
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56405923,1349049689267.bb4be5396608abeff803400cdd2408f4.
> 2012-10-01 19:50:01,338
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56364924,1349029457686.1e1c09b6eb734d8ad48ea0b4fa103381.
> 2012-10-01 19:50:01,339
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56784073,1348961864297.f01eaf712e59a0bca989ced951caf4f1.
> 2012-10-01 19:50:01,340
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56594534,1349027142822.8e67bb85f4906d579d4d278d55efce0b.
> 2012-10-01 19:50:01,340
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56342344,1348941231573.8adf268d4fcb494344745c14b090e773.
> 2012-10-01 19:50:01,340
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56491525,1349027928183.7bbfb4d39ef4332e17845001191a6ad4.
> 2012-10-01 19:50:01,341
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,07123624,1348959804638.c114ec80c6693a284741e220da028736.
> 2012-10-01 19:50:01,342
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56538694,1348941049708.b9f56cf1f6f6c7b0cdf2a07a3d36846b.
> 2012-10-01 19:50:01,342
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56546534,1348941049708.bde2614732f938db04fdd81ed6dbfcf2.
> 2012-10-01 19:50:01,343
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,569054,1348962040261.a7942d7837cd57b68d156d2ce7e3bd5f.
> 2012-10-01 19:50:01,343
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56576714,1348982931576.3dd5bf244fb116cf2b6f812fcc39ad2d.
> 2012-10-01 19:50:01,344
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,5689007,1348963034009.c4b16ea4d8dbc66c301e67d8e58a7e48.
> 2012-10-01 19:50:01,344
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56410784,1349027912141.6de7be1745c329cf9680ad15e9bde594.
> 2012-10-01 19:50:01,345
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56332944,1349028887914.506f6865d3167d722fec947a59761822.
> 2012-10-01 19:50:01,345
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96457954,1348964300132.674a03f0c9866968aabd70ab38a482c0.
> 2012-10-01 19:50:01,346
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56483084,1349027988535.de732d7e63ea53331b80255f51fc1a86.
> 2012-10-01 19:50:01,347
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56790484,1348941269761.5bcc58c48351de449cc17307ab4bf777.
> 2012-10-01 19:50:01,348
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56458293,1348982903997.4f67e6f4949a2ef7f4903f78f54c474e.
> 2012-10-01 19:50:01,348
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,95123235,1349026585563.a359eb4cb88d34a529804e50a5affa24.
> 2012-10-01 19:50:01,349
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56597943,1349029123008.bf64051a387fc2970252a1c8919dfd88.
> 2012-10-01 19:50:01,350
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56368484,1348941099873.cef2729093a0d7d72b71fac1b25c0a40.
> 2012-10-01 19:50:01,350
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,17499894,1349026916228.630196a553f73069b9e568e6912ef0c5.
> 2012-10-01 19:50:01,351
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56375315,1348940876960.40cf6dfa370ce7f1fc6c1a59ba2f2191.
> 2012-10-01 19:50:01,351
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,95512574,1349009451986.e4d292eb66d16c21ef8ae32254334850.
> 2012-10-01 19:50:01,352
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56717254,1349029064698.31a1dcefef4d5e3133b323cdaac918d7.
> 2012-10-01 19:50:01,352
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56532464,1348941298909.31c8b60bb6ad6840de937a28e3482101.
> 2012-10-01 19:50:01,353
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56432705,1348941150045.07aa626f3703c7b4deaba1263c71894d.
> 2012-10-01 19:50:01,353
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,13118725,1349026772953.c0be859d4a4dc2246d764a8aad58fe88.
> 2012-10-01 19:50:01,354
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56520814,1348981132325.c2f16fd16f83aa51769abedfe8968bb6.
> 2012-10-01 19:50:01,354
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,95507615,1349009451986.c08d16db6188bd8cec100eeb1291d5b9.
> 2012-10-01 19:50:01,355
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56884434,1348963034009.616835869c81659a27eab896f48ae4e1.
> 2012-10-01 19:50:01,355
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56476541,1349028080664.341392a325646f24a3d8b8cd27ebda19.
> 2012-10-01 19:50:01,357
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56803462,1348941834500.6313b36f1949381d01df977a182e6140.
> 2012-10-01 19:50:01,357
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96464524,1348964300132.7a15f1e8e28f713212c516777267c2bf.
> 2012-10-01 19:50:01,358
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56875074,1348969971950.3e408e7cb32c9213d184e10bf42837ad.
> 2012-10-01 19:50:01,359
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,42862354,1348981565262.7ad46818060be413140cdcc11312119d.
> 2012-10-01 19:50:01,359
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56582264,1349028973106.b481b61be387a041a3f259069d5013a6.
> 2012-10-01 19:50:01,360
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56579105,1348982931576.1561a22c16263dccb8be07c654b43f2f.
> 2012-10-01 19:50:01,360
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56723415,1348946404223.38d992d687ad8925810be4220a732b13.
> 2012-10-01 19:50:01,361
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,4285921,1348981565262.7a2cbd8452b9e406eaf1a5ebff64855a.
> 2012-10-01 19:50:01,362
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56336394,1348941231573.ca52393a2eabae00a64f65c0b657b95a.
> 2012-10-01 19:50:01,363
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,96452715,1348942353563.876edfc6e978879aac42bfc905a09c26.
> 2012-10-01 19:50:01,363
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56355104,1348941138879.326405233b2b444691860b14ef587f78.
> 2012-10-01 19:50:01,364
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56525625,1348941298909.ccf16ed8e761765d2989343c7670e94f.
> 2012-10-01 19:50:01,365
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,97578484,1348938848996.98ecacc61ae4c5b3f7a3de64bec0e026.
> 2012-10-01 19:50:01,365
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56779025,1348961864297.cc13f0a6f5e632508f2e28a174ef1488.
> 2012-10-01 19:50:01,366
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56728994,1348946404223.99012cf45da4109e6b570e8b0178852c.
> 2012-10-01 19:50:01,366
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_user_events,43323443,1348591057882.8b0ab02c33f275114d89088345f58885.
> 2012-10-01 19:50:01,367
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56819704,1348962077056.621ebefbdb194a82d6314ff0f58b67b1.
> 2012-10-01 19:50:01,367
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,56686234,1348942631318.69270cd5013f8ca984424e508878e428.
> 2012-10-01 19:50:01,368
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,98642625,1348980566403.2277d2ef1d53d40d41cd23846619a3f8.
> 2012-10-01 19:50:01,524 [IPC Server handler 57 on 60020] INFO
> org.apache.hadoop.hdfs.DFSClient: Could not obtain block
> blk_3201413024070455305_51616611 from any node: java.io.IOException:
> No live nodes contain current block. Will get new block locations from
> namenode and retry...
> 2012-10-01 19:50:02,462 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 2
> regions to close
> 2012-10-01 19:50:02,462 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
> 2012-10-01 19:50:02,462 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
> 10.100.101.156:50010
> 2012-10-01 19:50:02,495 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
> primary datanode 10.100.102.122:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy4.nextGenerationStamp(Unknown Source)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy14.recoverBlock(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:02,496 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.122:50010 failed 3 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
> retry...
> 2012-10-01 19:50:02,686 [IPC Server handler 46 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@504b62c6,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320404"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.172:53925: output error
> 2012-10-01 19:50:02,688 [IPC Server handler 46 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:02,688 [IPC Server handler 46 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 46 on 60020:
> exiting
> 2012-10-01 19:50:02,809 [IPC Server handler 55 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@45f1c31e,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00322424"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.178:35016: output error
> 2012-10-01 19:50:02,810 [IPC Server handler 55 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:02,810 [IPC Server handler 55 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 55 on 60020:
> exiting
> 2012-10-01 19:50:03,496 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
> 2012-10-01 19:50:03,496 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
> 10.100.101.156:50010
> 2012-10-01 19:50:03,510 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
> primary datanode 10.100.102.122:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy4.nextGenerationStamp(Unknown Source)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy14.recoverBlock(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:03,510 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.122:50010 failed 4 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
> retry...
> 2012-10-01 19:50:05,299 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
> 2012-10-01 19:50:05,299 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
> 10.100.101.156:50010
> 2012-10-01 19:50:05,314 [IPC Server handler 3 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@472aa9fe,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321694"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.176:42371: output error
> 2012-10-01 19:50:05,315 [IPC Server handler 3 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:05,315 [IPC Server handler 3 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60020:
> exiting
> 2012-10-01 19:50:05,329 [IPC Server handler 16 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@42987a12,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00320293"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.135:35132: output error
> 2012-10-01 19:50:05,331 [IPC Server handler 16 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:05,331 [IPC Server handler 16 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 16 on 60020:
> exiting
> 2012-10-01 19:50:05,638 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
> primary datanode 10.100.102.122:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy4.nextGenerationStamp(Unknown Source)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy14.recoverBlock(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:05,638 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.122:50010 failed 5 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010. Will
> retry...
> 2012-10-01 19:50:05,641 [IPC Server handler 26 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@a9c09e8,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319505"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.183:60078: output error
> 2012-10-01 19:50:05,643 [IPC Server handler 26 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:05,643 [IPC Server handler 26 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 26 on 60020:
> exiting
> 2012-10-01 19:50:05,664 [IPC Server handler 57 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@349d7b4,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319915"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.141:58290: output error
> 2012-10-01 19:50:05,666 [IPC Server handler 57 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:05,666 [IPC Server handler 57 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 57 on 60020:
> exiting
> 2012-10-01 19:50:07,063 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 bad datanode[0] 10.100.101.156:50010
> 2012-10-01 19:50:07,063 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 in pipeline 10.100.101.156:50010,
> 10.100.102.88:50010, 10.100.102.122:50010: bad datanode
> 10.100.101.156:50010
> 2012-10-01 19:50:07,076 [IPC Server handler 23 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@5ba03734,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319654"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.161:43227: output error
> 2012-10-01 19:50:07,077 [IPC Server handler 23 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:07,077 [IPC Server handler 23 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 23 on 60020:
> exiting
> 2012-10-01 19:50:07,089 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
> primary datanode 10.100.102.122:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy4.nextGenerationStamp(Unknown Source)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy14.recoverBlock(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:07,090 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.122:50010 failed 6 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010, 10.100.102.122:50010.
> Marking primary datanode as bad.
> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@3d19e607,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319564"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.82:42779: output error
> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:07,173 [IPC Server handler 85 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 85 on 60020:
> exiting
> 2012-10-01 19:50:07,181
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-2]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,00321084,1349118541283.a9906c96a91bb8d7e62a7a528bf0ea5c.
> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@5920511b,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00322014"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.88:49489: output error
> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:07,693 [IPC Server handler 79 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 79 on 60020:
> exiting
> 2012-10-01 19:50:08,064 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Waiting on 1
> regions to close
> 2012-10-01 19:50:08,159 [regionserver60020.leaseChecker] INFO
> org.apache.hadoop.hbase.regionserver.Leases:
> regionserver60020.leaseChecker closing leases
> 2012-10-01 19:50:08,159 [regionserver60020.leaseChecker] INFO
> org.apache.hadoop.hbase.regionserver.Leases:
> regionserver60020.leaseChecker closed leases
> 2012-10-01 19:50:08,508 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
> primary datanode 10.100.101.156:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy4.nextGenerationStamp(Unknown Source)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy14.recoverBlock(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:08,508 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.101.156:50010 failed 1 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:09,652 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
> primary datanode 10.100.101.156:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy4.nextGenerationStamp(Unknown Source)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy14.recoverBlock(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:09,653 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.101.156:50010 failed 2 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:10,697 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
> primary datanode 10.100.101.156:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy4.nextGenerationStamp(Unknown Source)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy14.recoverBlock(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:10,697 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.101.156:50010 failed 3 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:12,278 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
> primary datanode 10.100.101.156:50010
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock == null.
>    [stack trace identical to the first occurrence above]
> 2012-10-01 19:50:12,279 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.101.156:50010 failed 4 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:13,294 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
> primary datanode 10.100.101.156:50010
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock == null.
>    [stack trace identical to the first occurrence above]
> 2012-10-01 19:50:13,294 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.101.156:50010 failed 5 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:14,306 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
> primary datanode 10.100.101.156:50010
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock == null.
>    [stack trace identical to the first occurrence above]
> 2012-10-01 19:50:14,306 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.101.156:50010 failed 6 times.  Pipeline was
> 10.100.101.156:50010, 10.100.102.88:50010. Marking primary datanode as
> bad.
> 2012-10-01 19:50:15,317 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from
> primary datanode 10.100.102.88:50010
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock == null.
>    [stack trace identical to the first occurrence above]
> 2012-10-01 19:50:15,318 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 1 times.  Pipeline was
> 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:16,375 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #1 from
> primary datanode 10.100.102.88:50010
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock == null.
>    [stack trace identical to the first occurrence above]
> 2012-10-01 19:50:16,376 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 2 times.  Pipeline was
> 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:17,385 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #2 from
> primary datanode 10.100.102.88:50010
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock == null.
>    [stack trace identical to the first occurrence above]
> 2012-10-01 19:50:17,385 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 3 times.  Pipeline was
> 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:18,395 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #3 from
> primary datanode 10.100.102.88:50010
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock == null.
>    [stack trace identical to the first occurrence above]
> 2012-10-01 19:50:18,395 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 4 times.  Pipeline was
> 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:19,404 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #4 from
> primary datanode 10.100.102.88:50010
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock == null.
>    [stack trace identical to the first occurrence above]
> 2012-10-01 19:50:19,405 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 5 times.  Pipeline was
> 10.100.102.88:50010. Will retry...
> 2012-10-01 19:50:20,414 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #5 from
> primary datanode 10.100.102.88:50010
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException:
> blk_5535637699691880681_51616301 is already commited, storedBlock ==
> null.
>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.nextGenerationStampForBlock(FSNamesystem.java:5348)
>    at org.apache.hadoop.hdfs.server.namenode.NameNode.nextGenerationStamp(NameNode.java:705)
>    at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy4.nextGenerationStamp(Unknown Source)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.syncBlock(DataNode.java:1886)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1854)
>    at org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>    at java.security.AccessController.doPrivileged(Native Method)
>    at javax.security.auth.Subject.doAs(Subject.java:396)
>    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
>    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 
>    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy14.recoverBlock(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2793)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,415 [DataStreamer for file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> block blk_5535637699691880681_51616301] WARN
> org.apache.hadoop.hdfs.DFSClient: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
> 2012-10-01 19:50:20,415 [IPC Server handler 58 on 60020] ERROR
> org.apache.hadoop.hdfs.DFSClient: Exception closing file
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1341428844272.1349118568164
> : java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,415 [IPC Server handler 69 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Error while syncing
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,415 [regionserver60020.logSyncer] WARN
> org.apache.hadoop.hdfs.DFSClient: Error while syncing
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,417 [IPC Server handler 69 on 60020] WARN
> org.apache.hadoop.hdfs.DFSClient: Error while syncing
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,417 [IPC Server handler 58 on 60020] INFO
> org.apache.hadoop.fs.FileSystem: Could not cancel cleanup thread,
> though no FileSystems are open
> 2012-10-01 19:50:20,417 [regionserver60020.logSyncer] WARN
> org.apache.hadoop.hdfs.DFSClient: Error while syncing
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,417 [regionserver60020.logSyncer] FATAL
> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
> Requesting close of hlog
> java.io.IOException: Reflection
>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>    at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.reflect.InvocationTargetException
>    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>    ... 4 more
> Caused by: java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,418 [regionserver60020.logSyncer] ERROR
> org.apache.hadoop.hbase.regionserver.wal.HLog: Error while syncing,
> requesting close of hlog
> java.io.IOException: Reflection
>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>    at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.reflect.InvocationTargetException
>    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>    ... 4 more
> Caused by: java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,417 [IPC Server handler 69 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
> Requesting close of hlog
> java.io.IOException: Reflection
>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog.append(HLog.java:1033)
>    at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:1852)
>    at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1723)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3076)
>    at sun.reflect.GeneratedMethodAccessor96.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.lang.reflect.InvocationTargetException
>    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>    ... 11 more
> Caused by: java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:20,417 [IPC Server handler 29 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>    ... 9 more
> Caused by: java.lang.InterruptedException
>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>    ... 21 more
> 2012-10-01 19:50:20,417 [IPC Server handler 24 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>    ... 9 more
> Caused by: java.lang.InterruptedException
>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>    ... 21 more
> 2012-10-01 19:50:20,420 [IPC Server handler 24 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,417 [IPC Server handler 1 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>    ... 9 more
> Caused by: java.lang.InterruptedException
>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>    ... 21 more
> 2012-10-01 19:50:20,421 [IPC Server handler 1 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,417 [IPC Server handler 25 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>    ... 9 more
> Caused by: java.lang.InterruptedException
>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>    ... 21 more
> 2012-10-01 19:50:20,421 [IPC Server handler 25 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,417 [IPC Server handler 90 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>    ... 9 more
> Caused by: java.lang.InterruptedException
>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>    ... 21 more
> 2012-10-01 19:50:20,422 [IPC Server handler 90 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,417 [IPC Server handler 58 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>    ... 9 more
> Caused by: java.lang.InterruptedException
>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>    ... 21 more
> 2012-10-01 19:50:20,423 [IPC Server handler 58 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,417 [IPC Server handler 17 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region
> server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272: File
> System not available
> java.io.IOException: File system is not available
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:146)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1122)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: java.lang.InterruptedException
>    at org.apache.hadoop.ipc.Client.call(Client.java:1086)
>    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>    at $Proxy7.getFileInfo(Unknown Source)
>    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:697)
>    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:542)
>    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:748)
>    at org.apache.hadoop.hbase.util.FSUtils.checkFileSystemAvailable(FSUtils.java:135)
>    ... 9 more
> Caused by: java.lang.InterruptedException
>    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
>    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
>    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
>    at org.apache.hadoop.ipc.Client.call(Client.java:1080)
>    ... 21 more
> 2012-10-01 19:50:20,423 [IPC Server handler 17 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,423 [IPC Server handler 58 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Dump of metrics:
> requestsPerSecond=0, numberOfOnlineRegions=136, numberOfStores=136,
> numberOfStorefiles=189, storefileIndexSizeMB=15,
> rootIndexSizeKB=16019, totalStaticIndexSizeKB=16194,
> totalStaticBloomSizeKB=0, memstoreSizeMB=113,
> readRequestsCount=6744201, writeRequestsCount=904280,
> compactionQueueSize=0, flushQueueSize=0, usedHeapMB=1576,
> maxHeapMB=3987, blockCacheSizeMB=781.51, blockCacheFreeMB=215.3,
> blockCacheCount=5435, blockCacheHitCount=321294212,
> blockCacheMissCount=4657926, blockCacheEvictedCount=1864312,
> blockCacheHitRatio=98%, blockCacheHitCachingRatio=99%,
> hdfsBlocksLocalityIndex=97
> [identical "Dump of metrics" entries logged at 19:50:20 by IPC Server handlers 17, 90, 25, and 1 elided; same values as the handler 58 dump above]
> 2012-10-01 19:50:20,420 [IPC Server handler 69 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: (responseTooSlow):
> {"processingtimems":22039,"call":"multi(org.apache.hadoop.hbase.client.MultiAction@207c46fb),
> rpc version=1, client version=29,
> methodsFingerPrint=54742778","client":"10.100.102.155:39852","starttimems":1349120998380,"queuetimems":0,"class":"HRegionServer","responsesize":0,"method":"multi"}
> [near-identical "Dump of metrics" entry from IPC Server handler 24 elided; only usedHeapMB differs (1575 vs 1576)]
> 2012-10-01 19:50:20,420
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING
> region server data3024.ngpipes.milp.ngmoco.com,60020,1341428844272:
> Unrecoverable exception while closing region
> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
> still finishing close
> java.io.IOException: Filesystem closed
>    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
>    at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1859)
>    at java.io.FilterInputStream.close(FilterInputStream.java:155)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:320)
>    at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1081)
>    at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:568)
>    at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:473)
>    at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:789)
>    at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:724)
>    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:119)
>    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>    at java.lang.Thread.run(Thread.java:662)
> 2012-10-01 19:50:20,426
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> 2012-10-01 19:50:20,419 [IPC Server handler 29 on 60020] FATAL
> org.apache.hadoop.hbase.regionserver.HRegionServer: RegionServer
> abort: loaded coprocessors are: []
> [near-identical "Dump of metrics" entries from the RS_CLOSE_REGION thread and IPC Server handler 29 elided; only usedHeapMB differs (1577 vs 1576)]
> 2012-10-01 19:50:20,445 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
> fatal error to master
> java.lang.reflect.UndeclaredThrowableException
>    at $Proxy8.reportRSFatalError(Unknown Source)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Call to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
> local exception: java.nio.channels.ClosedChannelException
>    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>    ... 11 more
> Caused by: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:120)
>    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:163)
>    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>    at java.io.FilterInputStream.read(FilterInputStream.java:116)
>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:311)
>    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>    at java.io.DataInputStream.readInt(DataInputStream.java:370)
>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:571)
>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
> 2012-10-01 19:50:20,446 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
> fatal error to master
> java.lang.reflect.UndeclaredThrowableException
>    at $Proxy8.reportRSFatalError(Unknown Source)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.io.IOException: Call to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
> local exception: java.nio.channels.ClosedByInterruptException
>    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>    ... 11 more
> Caused by: java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
>    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
>    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>    at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:545)
>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)
>    ... 12 more
> 2012-10-01 19:50:20,447 [IPC Server handler 29 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
> System not available
> 2012-10-01 19:50:20,446 [IPC Server handler 58 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
> System not available
> 2012-10-01 19:50:20,446 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to report
> fatal error to master
> java.lang.reflect.UndeclaredThrowableException
>    at $Proxy8.reportRSFatalError(Unknown Source)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> Caused by: java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
>    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
>    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:408)
>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:328)
>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:362)
>    at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1045)
>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:897)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>    ... 11 more
> 2012-10-01 19:50:20,448 [IPC Server handler 17 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
> System not available
> [identical "Unable to report fatal error to master" stack trace from IPC Server handler 1 elided; same ClosedChannelException as the handler 58 trace above]
> 2012-10-01 19:50:20,448 [IPC Server handler 1 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
> System not available
> 2012-10-01 19:50:20,445
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> WARN org.apache.hadoop.hbase.regionserver.HRegionServer: Unable to
> report fatal error to master
> java.lang.reflect.UndeclaredThrowableException
>    at $Proxy8.reportRSFatalError(Unknown Source)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:131)
>    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>    at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Call to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:60000 failed on
> local exception: java.nio.channels.ClosedByInterruptException
>    at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:953)
>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:922)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>    ... 7 more
> Caused by: java.nio.channels.ClosedByInterruptException
>    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
>    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
>    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
>    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
>    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
>    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>    at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>    at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:545)
>    at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:898)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>    at $Proxy8.reportRSFatalError(Unknown Source)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:1564)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.checkFileSystem(HRegionServer.java:1124)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1068)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.cleanup(HRegionServer.java:1043)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:1787)
>    at sun.reflect.GeneratedMethodAccessor95.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326)
> 2012-10-01 19:50:20,450
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> INFO org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED:
> Unrecoverable exception while closing region
> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.,
> still finishing close
> 2012-10-01 19:50:20,445 [IPC Server handler 69 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> multi(org.apache.hadoop.hbase.client.MultiAction@207c46fb), rpc
> version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.155:39852: output error
> [identical "Unable to report fatal error to master" stack trace from IPC Server handler 24 elided; same ClosedChannelException as the handler 58 trace above]
> 2012-10-01 19:50:20,451 [IPC Server handler 24 on 60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: File
> System not available
> [identical "Unable to report fatal error to master" stack traces and "STOPPED: File System not available" entries from IPC Server handlers 90 and 25 elided; same ClosedChannelException as the handler 58 trace above]
> 2012-10-01 19:50:20,452 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@5d72e577,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00321312"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.184:34111: output error
> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@2237178f,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00316983"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.188:59581: output error
> 2012-10-01 19:50:20,450 [IPC Server handler 69 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelIO(HBaseServer.java:1710)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:20,452 [IPC Server handler 58 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 58 on 60020:
> exiting
> 2012-10-01 19:50:20,450
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-1]
> ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable
> while processing event M_RS_CLOSE_REGION
> java.lang.RuntimeException: java.io.IOException: Filesystem closed
>    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:133)
>    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>    at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Filesystem closed
>    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:232)
>    at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:70)
>    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1859)
>    at java.io.FilterInputStream.close(FilterInputStream.java:155)
>    at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.close(HFileReaderV2.java:320)
>    at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.close(StoreFile.java:1081)
>    at org.apache.hadoop.hbase.regionserver.StoreFile.closeReader(StoreFile.java:568)
>    at org.apache.hadoop.hbase.regionserver.Store.close(Store.java:473)
>    at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:789)
>    at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:724)
>    at org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:119)
>    ... 4 more
> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@573dba6d,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"0032027"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.183:60076: output error
> 2012-10-01 19:50:20,452 [IPC Server handler 69 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 69 on 60020:
> exiting
> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@4eebbed5,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00317054"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.146:40240: output error
> 2012-10-01 19:50:20,452 [IPC Server handler 29 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:20,453 [IPC Server handler 29 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 29 on 60020:
> exiting
> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:20,453 [IPC Server handler 17 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 17 on 60020:
> exiting
> 2012-10-01 19:50:20,453 [IPC Server handler 1 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60020:
> exiting
> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@4ff0ed4a,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00318964"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.172:53924: output error
> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:20,454 [IPC Server handler 24 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 24 on 60020:
> exiting
> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@526abe46,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00316914"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.101.184:34110: output error
> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:20,455 [IPC Server handler 90 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 90 on 60020:
> exiting
> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder, call
> get([B@5df20fef,
> {"timeRange":[0,9223372036854775807],"totalColumns":1,"cacheBlocks":true,"families":{"U":["ALL"]},"maxVersions":1,"row":"00319173"}),
> rpc version=1, client version=29, methodsFingerPrint=54742778 from
> 10.100.102.146:40243: output error
> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] WARN
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020
> caught: java.nio.channels.ClosedChannelException
>    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>    at org.apache.hadoop.hbase.ipc.HBaseServer.channelWrite(HBaseServer.java:1653)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.processResponse(HBaseServer.java:924)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Responder.doRespond(HBaseServer.java:1003)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Call.sendResponseIfReady(HBaseServer.java:409)
>    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1346)
> 
> 2012-10-01 19:50:20,456 [IPC Server handler 25 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 25 on 60020:
> exiting
> 2012-10-01 19:50:21,066
> [RS_CLOSE_REGION-data3024.ngpipes.milp.ngmoco.com,60020,1341428844272-0]
> INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed
> orwell_events,00316914,1349118541283.9740f22a42e9e8b6aca3966c0173e680.
> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] WARN
> org.apache.hadoop.hdfs.DFSClient: Error while syncing
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] WARN
> org.apache.hadoop.hdfs.DFSClient: Error while syncing
> java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:21,418 [regionserver60020.logSyncer] FATAL
> org.apache.hadoop.hbase.regionserver.wal.HLog: Could not sync.
> Requesting close of hlog
> java.io.IOException: Reflection
>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>    at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.reflect.InvocationTargetException
>    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>    ... 4 more
> Caused by: java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:21,419 [regionserver60020.logSyncer] ERROR
> org.apache.hadoop.hbase.regionserver.wal.HLog: Error while syncing,
> requesting close of hlog
> java.io.IOException: Reflection
>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:230)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog.syncer(HLog.java:1098)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog.sync(HLog.java:1202)
>    at org.apache.hadoop.hbase.regionserver.wal.HLog$LogSyncer.run(HLog.java:1060)
>    at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.reflect.InvocationTargetException
>    at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>    at java.lang.reflect.Method.invoke(Method.java:597)
>    at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogWriter.sync(SequenceFileLogWriter.java:228)
>    ... 4 more
> Caused by: java.io.IOException: Error Recovery for block
> blk_5535637699691880681_51616301 failed  because recovery from primary
> datanode 10.100.102.88:50010 failed 6 times.  Pipeline was
> 10.100.102.88:50010. Aborting...
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2833)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2305)
>    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2477)
> 2012-10-01 19:50:22,066 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272; all regions
> closed.
> 2012-10-01 19:50:22,066 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closing
> leases
> 2012-10-01 19:50:22,066 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.Leases: regionserver60020 closed
> leases
> 2012-10-01 19:50:22,082 [regionserver60020] WARN
> org.apache.hadoop.hbase.regionserver.HRegionServer: Failed deleting my
> ephemeral node
> org.apache.zookeeper.KeeperException$SessionExpiredException:
> KeeperErrorCode = Session expired for
> /hbase/rs/data3024.ngpipes.milp.ngmoco.com,60020,1341428844272
>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>    at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:868)
>    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.delete(RecoverableZooKeeper.java:107)
>    at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:962)
>    at org.apache.hadoop.hbase.zookeeper.ZKUtil.deleteNode(ZKUtil.java:951)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.deleteMyEphemeralNode(HRegionServer.java:964)
>    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:762)
>    at java.lang.Thread.run(Thread.java:662)
> 2012-10-01 19:50:22,082 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: stopping server
> data3024.ngpipes.milp.ngmoco.com,60020,1341428844272; zookeeper
> connection closed.
> 2012-10-01 19:50:22,082 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: regionserver60020
> exiting
> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
> starting; hbase.shutdown.hook=true;
> fsShutdownHook=Thread[Thread-5,5,main]
> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: STOPPED: Shutdown
> hook
> 2012-10-01 19:50:22,123 [Shutdownhook:regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Starting fs
> shutdown hook thread.
> 2012-10-01 19:50:22,124 [Shutdownhook:regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Shutdown hook
> finished.
> Mon Oct  1 19:54:10 UTC 2012 Starting regionserver on
> data3024.ngpipes.milp.ngmoco.com
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 20
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 16382
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 32768
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) 8192
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) unlimited
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> 2012-10-01 19:54:11,355 [main] INFO
> org.apache.hadoop.hbase.util.VersionInfo: HBase 0.92.1
> 2012-10-01 19:54:11,356 [main] INFO
> org.apache.hadoop.hbase.util.VersionInfo: Subversion
> https://svn.apache.org/repos/asf/hbase/branches/0.92 -r 1298924
> 2012-10-01 19:54:11,356 [main] INFO
> org.apache.hadoop.hbase.util.VersionInfo: Compiled by jenkins on Fri
> Mar  9 16:58:34 UTC 2012
> 2012-10-01 19:54:11,513 [main] INFO
> org.apache.hadoop.hbase.util.ServerCommandLine: vmName=Java
> HotSpot(TM) 64-Bit Server VM, vmVendor=Sun Microsystems Inc.,
> vmVersion=20.1-b02
> 2012-10-01 19:54:11,513 [main] INFO
> org.apache.hadoop.hbase.util.ServerCommandLine:
> vmInputArguments=[-XX:OnOutOfMemoryError=kill, -9, %p, -Xmx4000m,
> -XX:NewSize=128m, -XX:MaxNewSize=128m,
> -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC,
> -XX:CMSInitiatingOccupancyFraction=75, -verbose:gc,
> -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps,
> -Xloggc:/data2/hbase_log/gc-hbase.log,
> -Dcom.sun.management.jmxremote.authenticate=true,
> -Dcom.sun.management.jmxremote.ssl=false,
> -Dcom.sun.management.jmxremote.password.file=/home/hadoop/hadoop/conf/jmxremote.password,
> -Dcom.sun.management.jmxremote.port=8010,
> -Dhbase.log.dir=/data2/hbase_log,
> -Dhbase.log.file=hbase-hadoop-regionserver-data3024.ngpipes.milp.ngmoco.com.log,
> -Dhbase.home.dir=/home/hadoop/hbase, -Dhbase.id.str=hadoop,
> -Dhbase.root.logger=INFO,DRFA,
> -Djava.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64]
> 2012-10-01 19:54:11,964 [IPC Reader 0 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,967 [IPC Reader 1 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,970 [IPC Reader 2 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,973 [IPC Reader 3 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,976 [IPC Reader 4 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,979 [IPC Reader 5 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,982 [IPC Reader 6 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,985 [IPC Reader 7 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,988 [IPC Reader 8 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:11,991 [IPC Reader 9 on port 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: Starting Thread-1
> 2012-10-01 19:54:12,002 [main] INFO
> org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics
> with hostName=HRegionServer, port=60020
> 2012-10-01 19:54:12,081 [main] INFO
> org.apache.hadoop.hbase.io.hfile.CacheConfig: Allocating LruBlockCache
> with maximum size 996.8m
> 2012-10-01 19:54:12,221 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48
> GMT
> 2012-10-01 19:54:12,221 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:host.name=data3024.ngpipes.milp.ngmoco.com
> 2012-10-01 19:54:12,221 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:java.version=1.6.0_26
> 2012-10-01 19:54:12,221 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun
> Microsystems Inc.
> 2012-10-01 19:54:12,221 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:java.home=/usr/lib/jvm/java-6-sun-1.6.0.26/jre
> 2012-10-01 19:54:12,221 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:java.class.path=/home/hadoop/hbase/conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop/hbase:/home/hadoop/hbase/hbase-0.92.1.jar:/home/hadoop/hbase/hbase-0.92.1-tests.jar:/home/hadoop/hbase/lib/activation-1.1.jar:/home/hadoop/hbase/lib/asm-3.1.jar:/home/hadoop/hbase/lib/avro-1.5.3.jar:/home/hadoop/hbase/lib/avro-ipc-1.5.3.jar:/home/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hbase/lib/commons-cli-1.2.jar:/home/hadoop/hbase/lib/commons-codec-1.4.jar:/home/hadoop/hbase/lib/commons-collections-3.2.1.jar:/home/hadoop/hbase/lib/commons-configuration-1.6.jar:/home/hadoop/hbase/lib/commons-digester-1.8.jar:/home/hadoop/hbase/lib/commons-el-1.0.jar:/home/hadoop/hbase/lib/commons-httpclient-3.1.jar:/home/hadoop/hbase/lib/commons-lang-2.5.jar:/home/hadoop/hbase/lib/commons-logging-1.1.1.jar:/home/hadoop/hbase/lib/commons-math-2.1.jar:/home/hadoop/hbase/lib/commons-net-1.4.1.jar:/home/hadoop/hbase/lib/core-3.1.1.jar:/home/hadoop/hbase/lib/guava-r09.jar:/home/hadoop/hbase/lib/hadoop-core-0.20.2-cdh3u2.jar:/home/hadoop/hbase/lib/hadoop-lzo-0.4.9.jar:/home/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/home/hadoop/hbase/lib/httpclient-4.0.1.jar:/home/hadoop/hbase/lib/httpcore-4.0.1.jar:/home/hadoop/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/home/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase/lib/jaxb-api-2.1.jar:/home/hadoop/hbase/lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase/lib/jersey-core-1.4.jar:/home/hadoop/hbase/lib/jersey-json-1.4.jar:/home/hadoop/hbase/lib/jersey-server-1.4.jar:/home/hadoop/hbase/lib/jettison-1.1.jar:/home/hadoop/hbase/lib/jetty-6.1.26.jar:/home/hadoop/hbase/lib/jetty-util-6.1.26.jar:/home/hadoop/hbase/lib/jruby-complete-1.6.5.jar:/home/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase/lib/libthrift-0.7.0.jar:/home/hadoop/hbase/lib/log4j-1.2.16.jar:/home/hadoop/hbase/lib/netty-3.2.4.Final.jar:/home/hadoop/hbase/lib/protobuf-java-2.4.0a.jar:/home/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase/lib/servlet-api-2.5.jar:/home/hadoop/hbase/lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/home/hadoop/hbase/lib/stax-api-1.0.1.jar:/home/hadoop/hbase/lib/velocity-1.7.jar:/home/hadoop/hbase/lib/xmlenc-0.52.jar:/home/hadoop/hbase/lib/zookeeper-3.4.3.jar:
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:java.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:os.version=2.6.35-30-generic
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client environment:user.name=hadoop
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:user.home=/home/hadoop/
> 2012-10-01 19:54:12,222 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Client
> environment:user.dir=/home/gregross
> 2012-10-01 19:54:12,225 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Initiating client connection,
> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
> sessionTimeout=180000 watcher=regionserver:60020
> 2012-10-01 19:54:12,251 [regionserver60020-SendThread()] INFO
> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
> /10.100.102.197:2181
> 2012-10-01 19:54:12,252 [regionserver60020] INFO
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
> of this process is 15403@data3024.ngpipes.milp.ngmoco.com
> 2012-10-01 19:54:12,259
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section
> 'Client' could not be found. If you are not using SASL, you may ignore
> this. On the other hand, if you expected SASL to work, please fix your
> JAAS configuration.
> 2012-10-01 19:54:12,260
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
> session
> 2012-10-01 19:54:12,272
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
> server; r-o mode will be unavailable
> 2012-10-01 19:54:12,273
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
> sessionid = 0x137ec64373dd4b5, negotiated timeout = 40000
> 2012-10-01 19:54:12,289 [main] INFO
> org.apache.hadoop.hbase.regionserver.ShutdownHook: Installed shutdown
> hook thread: Shutdownhook:regionserver60020
> 2012-10-01 19:54:12,352 [regionserver60020] INFO
> org.apache.zookeeper.ZooKeeper: Initiating client connection,
> connectString=namenode-sn301.ngpipes.milp.ngmoco.com:2181
> sessionTimeout=180000 watcher=hconnection
> 2012-10-01 19:54:12,353 [regionserver60020-SendThread()] INFO
> org.apache.zookeeper.ClientCnxn: Opening socket connection to server
> /10.100.102.197:2181
> 2012-10-01 19:54:12,353 [regionserver60020] INFO
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: The identifier
> of this process is 15403@data3024.ngpipes.milp.ngmoco.com
> 2012-10-01 19:54:12,354
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not
> SASL-authenticate because the default JAAS configuration section
> 'Client' could not be found. If you are not using SASL, you may ignore
> this. On the other hand, if you expected SASL to work, please fix your
> JAAS configuration.
> 2012-10-01 19:54:12,354
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Socket connection established to
> namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181, initiating
> session
> 2012-10-01 19:54:12,361
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> WARN org.apache.zookeeper.ClientCnxnSocket: Connected to an old
> server; r-o mode will be unavailable
> 2012-10-01 19:54:12,361
> [regionserver60020-SendThread(namenode-sn301.ngpipes.milp.ngmoco.com:2181)]
> INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
> on server namenode-sn301.ngpipes.milp.ngmoco.com/10.100.102.197:2181,
> sessionid = 0x137ec64373dd4b6, negotiated timeout = 40000
> 2012-10-01 19:54:12,384 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher:
> globalMemStoreLimit=1.6g, globalMemStoreLimitLowMark=1.4g,
> maxHeap=3.9g
> 2012-10-01 19:54:12,400 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Runs every 2hrs,
> 46mins, 40sec
> 2012-10-01 19:54:12,420 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Attempting connect
> to Master server at
> namenode-sn301.ngpipes.milp.ngmoco.com,60000,1348698078915
> 2012-10-01 19:54:12,453 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Connected to
> master at data3024.ngpipes.milp.ngmoco.com/10.100.101.156:60020
> 2012-10-01 19:54:12,453 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Telling master at
> namenode-sn301.ngpipes.milp.ngmoco.com,60000,1348698078915 that we are
> up with port=60020, startcode=1349121252040
> 2012-10-01 19:54:12,476 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Master passed us
> hostname to use. Was=data3024.ngpipes.milp.ngmoco.com,
> Now=data3024.ngpipes.milp.ngmoco.com
> 2012-10-01 19:54:12,568 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.wal.HLog: HLog configuration:
> blocksize=64 MB, rollsize=60.8 MB, enabled=true,
> optionallogflushinternal=1000ms
> 2012-10-01 19:54:12,642 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.wal.HLog:  for
> /hbase/.logs/data3024.ngpipes.milp.ngmoco.com,60020,1349121252040/data3024.ngpipes.milp.ngmoco.com%2C60020%2C1349121252040.1349121252569
> 2012-10-01 19:54:12,643 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.wal.HLog: Using
> getNumCurrentReplicas--HDFS-826
> 2012-10-01 19:54:12,651 [regionserver60020] INFO
> org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics
> with processName=RegionServer, sessionId=regionserver60020
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: revision
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: date
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: user
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: url
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: MetricsString added: version
> 2012-10-01 19:54:12,656 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: new MBeanInfo
> 2012-10-01 19:54:12,657 [regionserver60020] INFO
> org.apache.hadoop.hbase.metrics: new MBeanInfo
> 2012-10-01 19:54:12,657 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.metrics.RegionServerMetrics:
> Initialized
> 2012-10-01 19:54:12,722 [regionserver60020] INFO org.mortbay.log:
> Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2012-10-01 19:54:12,774 [regionserver60020] INFO
> org.apache.hadoop.http.HttpServer: Added global filtersafety
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 2012-10-01 19:54:12,787 [regionserver60020] INFO
> org.apache.hadoop.http.HttpServer: Port returned by
> webServer.getConnectors()[0].getLocalPort() before open() is -1.
> Opening the listener on 60030
> 2012-10-01 19:54:12,787 [regionserver60020] INFO
> org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned
> 60030 webServer.getConnectors()[0].getLocalPort() returned 60030
> 2012-10-01 19:54:12,787 [regionserver60020] INFO
> org.apache.hadoop.http.HttpServer: Jetty bound to port 60030
> 2012-10-01 19:54:12,787 [regionserver60020] INFO org.mortbay.log: jetty-6.1.26
> 2012-10-01 19:54:13,079 [regionserver60020] INFO org.mortbay.log:
> Started SelectChannelConnector@0.0.0.0:60030
> 2012-10-01 19:54:13,079 [IPC Server Responder] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting
> 2012-10-01 19:54:13,079 [IPC Server listener on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60020:
> starting
> 2012-10-01 19:54:13,094 [IPC Server handler 0 on 60020] INFO
> org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60020:
> starting
> [... identical "starting" lines for IPC Server handlers 1-99 and for
> PRI IPC Server handlers 0-9 elided; all started between 19:54:13,094
> and 19:54:13,111 with no errors ...]
> 2012-10-01 19:54:13,124 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Serving as
> data3024.ngpipes.milp.ngmoco.com,60020,1349121252040, RPC listening on
> data3024.ngpipes.milp.ngmoco.com/10.100.101.156:60020,
> sessionid=0x137ec64373dd4b5
> 2012-10-01 19:54:13,124
> [SplitLogWorker-data3024.ngpipes.milp.ngmoco.com,60020,1349121252040]
> INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker:
> SplitLogWorker data3024.ngpipes.milp.ngmoco.com,60020,1349121252040
> starting
> 2012-10-01 19:54:13,125 [regionserver60020] INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Registered
> RegionServer MXBean
> 
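One note before the GC log below: only stop-the-world events (ParNew collections, CMS initial-mark, CMS-remark, full GCs) actually pause the region server; the `CMS-concurrent-abortable-preclean` entries reporting `real=5.0x secs` run concurrently with the application and are harmless on their own. A minimal, hypothetical sketch for scanning a GC log (assuming the `-XX:+PrintGCDetails -XX:+PrintGCTimeStamps` format shown here) for stop-the-world entries long enough to explain a ~29s ZooKeeper timeout:

```python
import re

# Matches the wall-clock time at the end of each GC log entry,
# e.g. "[Times: user=0.13 sys=0.02, real=0.05 secs]".
PAUSE_RE = re.compile(r"real=([0-9.]+) secs")

# Phases that stop application threads; concurrent CMS phases
# (concurrent mark / preclean / sweep / reset) do not.
STW_MARKERS = ("ParNew", "CMS-initial-mark", "CMS-remark", "Full GC")

def long_stw_pauses(lines, threshold=1.0):
    """Yield (seconds, line) for stop-the-world entries whose
    wall-clock time meets or exceeds the threshold."""
    for line in lines:
        if not any(marker in line for marker in STW_MARKERS):
            continue  # skip concurrent phases and non-GC lines
        m = PAUSE_RE.search(line)
        if m and float(m.group(1)) >= threshold:
            yield float(m.group(1)), line.rstrip()
```

Run against the full gc.log this would point at any single entry responsible for the 28970ms sleep; if no stop-the-world entry is anywhere near that long, the stall is likely outside the JVM (e.g. swapping or disk I/O) rather than a garbage collecting pause.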
> GC log
> ======
> 
> 1.914: [GC 1.914: [ParNew: 99976K->7646K(118016K), 0.0087130 secs]
> 99976K->7646K(123328K), 0.0088110 secs] [Times: user=0.07 sys=0.00,
> real=0.00 secs]
> 416.341: [GC 416.341: [ParNew: 112558K->12169K(118016K), 0.0447760
> secs] 112558K->25025K(133576K), 0.0450080 secs] [Times: user=0.13
> sys=0.02, real=0.05 secs]
> 416.386: [GC [1 CMS-initial-mark: 12855K(15560K)] 25089K(133576K),
> 0.0037570 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 416.390: [CMS-concurrent-mark-start]
> 416.407: [CMS-concurrent-mark: 0.015/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 416.407: [CMS-concurrent-preclean-start]
> 416.408: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 416.408: [GC[YG occupancy: 12233 K (118016 K)]416.408: [Rescan
> (parallel) , 0.0074970 secs]416.416: [weak refs processing, 0.0000370
> secs] [1 CMS-remark: 12855K(15560K)] 25089K(133576K), 0.0076480 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 416.416: [CMS-concurrent-sweep-start]
> 416.419: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 416.419: [CMS-concurrent-reset-start]
> 416.467: [CMS-concurrent-reset: 0.049/0.049 secs] [Times: user=0.01
> sys=0.04, real=0.05 secs]
> 418.468: [GC [1 CMS-initial-mark: 12855K(21428K)] 26216K(139444K),
> 0.0037020 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 418.471: [CMS-concurrent-mark-start]
> 418.487: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 418.487: [CMS-concurrent-preclean-start]
> 418.488: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 418.488: [GC[YG occupancy: 13360 K (118016 K)]418.488: [Rescan
> (parallel) , 0.0090770 secs]418.497: [weak refs processing, 0.0000170
> secs] [1 CMS-remark: 12855K(21428K)] 26216K(139444K), 0.0092220 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 418.497: [CMS-concurrent-sweep-start]
> 418.500: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 418.500: [CMS-concurrent-reset-start]
> 418.511: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 420.512: [GC [1 CMS-initial-mark: 12854K(21428K)] 26344K(139444K),
> 0.0041050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 420.516: [CMS-concurrent-mark-start]
> 420.532: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.05
> sys=0.01, real=0.01 secs]
> 420.532: [CMS-concurrent-preclean-start]
> 420.533: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 420.533: [GC[YG occupancy: 13489 K (118016 K)]420.533: [Rescan
> (parallel) , 0.0014850 secs]420.534: [weak refs processing, 0.0000130
> secs] [1 CMS-remark: 12854K(21428K)] 26344K(139444K), 0.0015920 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 420.534: [CMS-concurrent-sweep-start]
> 420.537: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 420.537: [CMS-concurrent-reset-start]
> 420.548: [CMS-concurrent-reset: 0.011/0.011 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 422.437: [GC [1 CMS-initial-mark: 12854K(21428K)] 28692K(139444K),
> 0.0051030 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 422.443: [CMS-concurrent-mark-start]
> 422.458: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 422.458: [CMS-concurrent-preclean-start]
> 422.458: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 422.458: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 427.541:
> [CMS-concurrent-abortable-preclean: 0.678/5.083 secs] [Times:
> user=0.66 sys=0.00, real=5.08 secs]
> 427.541: [GC[YG occupancy: 16198 K (118016 K)]427.541: [Rescan
> (parallel) , 0.0013750 secs]427.543: [weak refs processing, 0.0000140
> secs] [1 CMS-remark: 12854K(21428K)] 29053K(139444K), 0.0014800 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 427.543: [CMS-concurrent-sweep-start]
> 427.544: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 427.544: [CMS-concurrent-reset-start]
> 427.557: [CMS-concurrent-reset: 0.013/0.013 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> [... eight further back-to-back CMS cycles (429.5s to 484.3s) elided;
> each repeats the same pattern (initial-mark, concurrent mark,
> abortable-preclean aborted after ~5s, sub-millisecond remark, sweep,
> reset) with the old gen steady at 12854K(21428K) and YG occupancy
> growing from 18154K to 21583K ...]
> 486.308: [GC [1 CMS-initial-mark: 12854K(21428K)] 34566K(139444K),
> 0.0041800 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 486.312: [CMS-concurrent-mark-start]
> 486.324: [CMS-concurrent-mark: 0.012/0.012 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 486.324: [CMS-concurrent-preclean-start]
> 486.324: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 486.324: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 491.394:
> [CMS-concurrent-abortable-preclean: 0.565/5.070 secs] [Times:
> user=0.56 sys=0.00, real=5.06 secs]
> 491.394: [GC[YG occupancy: 22032 K (118016 K)]491.395: [Rescan
> (parallel) , 0.0018030 secs]491.396: [weak refs processing, 0.0000090
> secs] [1 CMS-remark: 12854K(21428K)] 34887K(139444K), 0.0018830 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 491.397: [CMS-concurrent-sweep-start]
> 491.398: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 491.398: [CMS-concurrent-reset-start]
> 491.406: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 493.407: [GC [1 CMS-initial-mark: 12854K(21428K)] 35080K(139444K),
> 0.0027620 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 493.410: [CMS-concurrent-mark-start]
> 493.420: [CMS-concurrent-mark: 0.010/0.010 secs] [Times: user=0.04
> sys=0.00, real=0.01 secs]
> 493.420: [CMS-concurrent-preclean-start]
> 493.420: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 493.420: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 498.525:
> [CMS-concurrent-abortable-preclean: 0.600/5.106 secs] [Times:
> user=0.61 sys=0.00, real=5.11 secs]
> 498.526: [GC[YG occupancy: 22545 K (118016 K)]498.526: [Rescan
> (parallel) , 0.0019450 secs]498.528: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12854K(21428K)] 35400K(139444K), 0.0020460 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 498.528: [CMS-concurrent-sweep-start]
> 498.530: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 498.530: [CMS-concurrent-reset-start]
> 498.538: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 500.538: [GC [1 CMS-initial-mark: 12854K(21428K)] 35529K(139444K),
> 0.0027790 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 500.541: [CMS-concurrent-mark-start]
> 500.554: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 500.554: [CMS-concurrent-preclean-start]
> 500.554: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 500.554: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 505.616:
> [CMS-concurrent-abortable-preclean: 0.557/5.062 secs] [Times:
> user=0.56 sys=0.00, real=5.06 secs]
> 505.617: [GC[YG occupancy: 22995 K (118016 K)]505.617: [Rescan
> (parallel) , 0.0023440 secs]505.619: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12854K(21428K)] 35850K(139444K), 0.0024280 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 505.619: [CMS-concurrent-sweep-start]
> 505.621: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 505.621: [CMS-concurrent-reset-start]
> 505.629: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 507.630: [GC [1 CMS-initial-mark: 12854K(21428K)] 35978K(139444K),
> 0.0027500 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 507.632: [CMS-concurrent-mark-start]
> 507.645: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 507.645: [CMS-concurrent-preclean-start]
> 507.646: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 507.646: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 512.697:
> [CMS-concurrent-abortable-preclean: 0.562/5.051 secs] [Times:
> user=0.57 sys=0.00, real=5.05 secs]
> 512.697: [GC[YG occupancy: 23484 K (118016 K)]512.697: [Rescan
> (parallel) , 0.0020030 secs]512.699: [weak refs processing, 0.0000090
> secs] [1 CMS-remark: 12854K(21428K)] 36339K(139444K), 0.0020830 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 512.700: [CMS-concurrent-sweep-start]
> 512.701: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 512.701: [CMS-concurrent-reset-start]
> 512.709: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 514.710: [GC [1 CMS-initial-mark: 12854K(21428K)] 36468K(139444K),
> 0.0028400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 514.713: [CMS-concurrent-mark-start]
> 514.725: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 514.725: [CMS-concurrent-preclean-start]
> 514.725: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 514.725: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 519.800:
> [CMS-concurrent-abortable-preclean: 0.619/5.075 secs] [Times:
> user=0.66 sys=0.00, real=5.07 secs]
> 519.801: [GC[YG occupancy: 25022 K (118016 K)]519.801: [Rescan
> (parallel) , 0.0023950 secs]519.803: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12854K(21428K)] 37877K(139444K), 0.0024980 secs]
> [Times: user=0.02 sys=0.00, real=0.01 secs]
> 519.803: [CMS-concurrent-sweep-start]
> 519.805: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 519.805: [CMS-concurrent-reset-start]
> 519.813: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 521.814: [GC [1 CMS-initial-mark: 12854K(21428K)] 38005K(139444K),
> 0.0045520 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 521.818: [CMS-concurrent-mark-start]
> 521.833: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 521.833: [CMS-concurrent-preclean-start]
> 521.833: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 521.833: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 526.840:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 526.840: [GC[YG occupancy: 25471 K (118016 K)]526.840: [Rescan
> (parallel) , 0.0024440 secs]526.843: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12854K(21428K)] 38326K(139444K), 0.0025440 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 526.843: [CMS-concurrent-sweep-start]
> 526.845: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 526.845: [CMS-concurrent-reset-start]
> 526.853: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 528.853: [GC [1 CMS-initial-mark: 12849K(21428K)] 38449K(139444K),
> 0.0045550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 528.858: [CMS-concurrent-mark-start]
> 528.872: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 528.872: [CMS-concurrent-preclean-start]
> 528.873: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 528.873: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 533.876:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 533.876: [GC[YG occupancy: 25919 K (118016 K)]533.877: [Rescan
> (parallel) , 0.0028370 secs]533.879: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 38769K(139444K), 0.0029390 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 533.880: [CMS-concurrent-sweep-start]
> 533.882: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 533.882: [CMS-concurrent-reset-start]
> 533.891: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 535.891: [GC [1 CMS-initial-mark: 12849K(21428K)] 38897K(139444K),
> 0.0046460 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 535.896: [CMS-concurrent-mark-start]
> 535.910: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 535.910: [CMS-concurrent-preclean-start]
> 535.911: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 535.911: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 540.917:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 540.917: [GC[YG occupancy: 26367 K (118016 K)]540.917: [Rescan
> (parallel) , 0.0025680 secs]540.920: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 39217K(139444K), 0.0026690 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 540.920: [CMS-concurrent-sweep-start]
> 540.922: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 540.922: [CMS-concurrent-reset-start]
> 540.930: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 542.466: [GC [1 CMS-initial-mark: 12849K(21428K)] 39555K(139444K),
> 0.0050040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 542.471: [CMS-concurrent-mark-start]
> 542.486: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 542.486: [CMS-concurrent-preclean-start]
> 542.486: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 542.486: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 547.491:
> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 547.491: [GC[YG occupancy: 27066 K (118016 K)]547.491: [Rescan
> (parallel) , 0.0024720 secs]547.494: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 39916K(139444K), 0.0025720 secs]
> [Times: user=0.02 sys=0.00, real=0.01 secs]
> 547.494: [CMS-concurrent-sweep-start]
> 547.496: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 547.496: [CMS-concurrent-reset-start]
> 547.505: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 549.506: [GC [1 CMS-initial-mark: 12849K(21428K)] 40044K(139444K),
> 0.0048760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 549.511: [CMS-concurrent-mark-start]
> 549.524: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 549.524: [CMS-concurrent-preclean-start]
> 549.525: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 549.525: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 554.530:
> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 554.530: [GC[YG occupancy: 27515 K (118016 K)]554.530: [Rescan
> (parallel) , 0.0025270 secs]554.533: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 40364K(139444K), 0.0026190 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 554.533: [CMS-concurrent-sweep-start]
> 554.534: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 554.534: [CMS-concurrent-reset-start]
> 554.542: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 556.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 40493K(139444K),
> 0.0048950 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 556.548: [CMS-concurrent-mark-start]
> 556.562: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 556.562: [CMS-concurrent-preclean-start]
> 556.562: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 556.563: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 561.565:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 561.566: [GC[YG occupancy: 27963 K (118016 K)]561.566: [Rescan
> (parallel) , 0.0025900 secs]561.568: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 40813K(139444K), 0.0026910 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 561.569: [CMS-concurrent-sweep-start]
> 561.570: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 561.570: [CMS-concurrent-reset-start]
> 561.578: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 563.579: [GC [1 CMS-initial-mark: 12849K(21428K)] 40941K(139444K),
> 0.0049390 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 563.584: [CMS-concurrent-mark-start]
> 563.598: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 563.598: [CMS-concurrent-preclean-start]
> 563.598: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 563.598: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 568.693:
> [CMS-concurrent-abortable-preclean: 0.717/5.095 secs] [Times:
> user=0.71 sys=0.00, real=5.09 secs]
> 568.694: [GC[YG occupancy: 28411 K (118016 K)]568.694: [Rescan
> (parallel) , 0.0035750 secs]568.697: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 41261K(139444K), 0.0036740 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 568.698: [CMS-concurrent-sweep-start]
> 568.700: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 568.700: [CMS-concurrent-reset-start]
> 568.709: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 570.709: [GC [1 CMS-initial-mark: 12849K(21428K)] 41389K(139444K),
> 0.0048710 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 570.714: [CMS-concurrent-mark-start]
> 570.729: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 570.729: [CMS-concurrent-preclean-start]
> 570.729: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 570.729: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 575.738:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 575.738: [GC[YG occupancy: 28900 K (118016 K)]575.738: [Rescan
> (parallel) , 0.0036390 secs]575.742: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 41750K(139444K), 0.0037440 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 575.742: [CMS-concurrent-sweep-start]
> 575.744: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 575.744: [CMS-concurrent-reset-start]
> 575.752: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 577.752: [GC [1 CMS-initial-mark: 12849K(21428K)] 41878K(139444K),
> 0.0050100 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 577.758: [CMS-concurrent-mark-start]
> 577.772: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 577.772: [CMS-concurrent-preclean-start]
> 577.773: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 577.773: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 582.779:
> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 582.779: [GC[YG occupancy: 29348 K (118016 K)]582.779: [Rescan
> (parallel) , 0.0026100 secs]582.782: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 42198K(139444K), 0.0027110 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 582.782: [CMS-concurrent-sweep-start]
> 582.784: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 582.784: [CMS-concurrent-reset-start]
> 582.792: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 584.792: [GC [1 CMS-initial-mark: 12849K(21428K)] 42326K(139444K),
> 0.0050510 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 584.798: [CMS-concurrent-mark-start]
> 584.812: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 584.812: [CMS-concurrent-preclean-start]
> 584.813: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 584.813: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 589.819:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 589.819: [GC[YG occupancy: 29797 K (118016 K)]589.819: [Rescan
> (parallel) , 0.0039510 secs]589.823: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 42647K(139444K), 0.0040460 secs]
> [Times: user=0.03 sys=0.00, real=0.01 secs]
> 589.824: [CMS-concurrent-sweep-start]
> 589.826: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 589.826: [CMS-concurrent-reset-start]
> 589.835: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 591.835: [GC [1 CMS-initial-mark: 12849K(21428K)] 42775K(139444K),
> 0.0050090 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 591.840: [CMS-concurrent-mark-start]
> 591.855: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 591.855: [CMS-concurrent-preclean-start]
> 591.855: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 591.855: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 596.857:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 596.857: [GC[YG occupancy: 31414 K (118016 K)]596.857: [Rescan
> (parallel) , 0.0028500 secs]596.860: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 44264K(139444K), 0.0029480 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 596.861: [CMS-concurrent-sweep-start]
> 596.862: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 596.862: [CMS-concurrent-reset-start]
> 596.870: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 598.871: [GC [1 CMS-initial-mark: 12849K(21428K)] 44392K(139444K),
> 0.0050640 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 598.876: [CMS-concurrent-mark-start]
> 598.890: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 598.890: [CMS-concurrent-preclean-start]
> 598.891: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 598.891: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 603.897:
> [CMS-concurrent-abortable-preclean: 0.705/5.007 secs] [Times:
> user=0.72 sys=0.00, real=5.01 secs]
> 603.898: [GC[YG occupancy: 32032 K (118016 K)]603.898: [Rescan
> (parallel) , 0.0039660 secs]603.902: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 44882K(139444K), 0.0040680 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 603.902: [CMS-concurrent-sweep-start]
> 603.903: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 603.903: [CMS-concurrent-reset-start]
> 603.912: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 605.912: [GC [1 CMS-initial-mark: 12849K(21428K)] 45010K(139444K),
> 0.0053650 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 605.918: [CMS-concurrent-mark-start]
> 605.932: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 605.932: [CMS-concurrent-preclean-start]
> 605.932: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 605.932: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 610.939:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 610.940: [GC[YG occupancy: 32481 K (118016 K)]610.940: [Rescan
> (parallel) , 0.0032540 secs]610.943: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 45330K(139444K), 0.0033560 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 610.943: [CMS-concurrent-sweep-start]
> 610.944: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 610.945: [CMS-concurrent-reset-start]
> 610.953: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 612.486: [GC [1 CMS-initial-mark: 12849K(21428K)] 45459K(139444K),
> 0.0055070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 612.492: [CMS-concurrent-mark-start]
> 612.505: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 612.505: [CMS-concurrent-preclean-start]
> 612.506: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 612.506: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 617.511:
> [CMS-concurrent-abortable-preclean: 0.700/5.006 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 617.512: [GC[YG occupancy: 32929 K (118016 K)]617.512: [Rescan
> (parallel) , 0.0037500 secs]617.516: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 45779K(139444K), 0.0038560 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 617.516: [CMS-concurrent-sweep-start]
> 617.518: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 617.518: [CMS-concurrent-reset-start]
> 617.527: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 619.528: [GC [1 CMS-initial-mark: 12849K(21428K)] 45907K(139444K),
> 0.0053320 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 619.533: [CMS-concurrent-mark-start]
> 619.546: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 619.546: [CMS-concurrent-preclean-start]
> 619.547: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 619.547: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 624.552:
> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 624.552: [GC[YG occupancy: 33377 K (118016 K)]624.552: [Rescan
> (parallel) , 0.0037290 secs]624.556: [weak refs processing, 0.0000130
> secs] [1 CMS-remark: 12849K(21428K)] 46227K(139444K), 0.0038330 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 624.556: [CMS-concurrent-sweep-start]
> 624.558: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 624.558: [CMS-concurrent-reset-start]
> 624.568: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 626.568: [GC [1 CMS-initial-mark: 12849K(21428K)] 46355K(139444K),
> 0.0054240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 626.574: [CMS-concurrent-mark-start]
> 626.588: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 626.588: [CMS-concurrent-preclean-start]
> 626.588: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 626.588: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 631.592:
> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 631.592: [GC[YG occupancy: 33825 K (118016 K)]631.593: [Rescan
> (parallel) , 0.0041600 secs]631.597: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 46675K(139444K), 0.0042650 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 631.597: [CMS-concurrent-sweep-start]
> 631.598: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 631.598: [CMS-concurrent-reset-start]
> 631.607: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 632.495: [GC [1 CMS-initial-mark: 12849K(21428K)] 46839K(139444K),
> 0.0054380 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 632.501: [CMS-concurrent-mark-start]
> 632.516: [CMS-concurrent-mark: 0.014/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 632.516: [CMS-concurrent-preclean-start]
> 632.517: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 632.517: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 637.519:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 637.519: [GC[YG occupancy: 34350 K (118016 K)]637.519: [Rescan
> (parallel) , 0.0025310 secs]637.522: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 47200K(139444K), 0.0026540 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 637.522: [CMS-concurrent-sweep-start]
> 637.523: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 637.523: [CMS-concurrent-reset-start]
> 637.532: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 639.532: [GC [1 CMS-initial-mark: 12849K(21428K)] 47328K(139444K),
> 0.0055330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 639.538: [CMS-concurrent-mark-start]
> 639.551: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 639.551: [CMS-concurrent-preclean-start]
> 639.552: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 639.552: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 644.561:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 644.561: [GC[YG occupancy: 34798 K (118016 K)]644.561: [Rescan
> (parallel) , 0.0040620 secs]644.565: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 47648K(139444K), 0.0041610 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 644.566: [CMS-concurrent-sweep-start]
> 644.568: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 644.568: [CMS-concurrent-reset-start]
> 644.577: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 646.577: [GC [1 CMS-initial-mark: 12849K(21428K)] 47776K(139444K),
> 0.0054660 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 646.583: [CMS-concurrent-mark-start]
> 646.596: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 646.596: [CMS-concurrent-preclean-start]
> 646.597: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 646.597: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 651.678:
> [CMS-concurrent-abortable-preclean: 0.732/5.081 secs] [Times:
> user=0.74 sys=0.00, real=5.08 secs]
> 651.678: [GC[YG occupancy: 35246 K (118016 K)]651.678: [Rescan
> (parallel) , 0.0025920 secs]651.681: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 48096K(139444K), 0.0026910 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 651.681: [CMS-concurrent-sweep-start]
> 651.682: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 651.682: [CMS-concurrent-reset-start]
> 651.690: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 653.691: [GC [1 CMS-initial-mark: 12849K(21428K)] 48224K(139444K),
> 0.0055640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 653.696: [CMS-concurrent-mark-start]
> 653.711: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 653.711: [CMS-concurrent-preclean-start]
> 653.711: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 653.711: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 658.721:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 658.721: [GC[YG occupancy: 35695 K (118016 K)]658.721: [Rescan
> (parallel) , 0.0040160 secs]658.725: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 48545K(139444K), 0.0041130 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 658.725: [CMS-concurrent-sweep-start]
> 658.727: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 658.728: [CMS-concurrent-reset-start]
> 658.737: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 660.737: [GC [1 CMS-initial-mark: 12849K(21428K)] 48673K(139444K),
> 0.0055230 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 660.743: [CMS-concurrent-mark-start]
> 660.756: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 660.756: [CMS-concurrent-preclean-start]
> 660.757: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 660.757: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 665.767:
> [CMS-concurrent-abortable-preclean: 0.704/5.011 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 665.768: [GC[YG occupancy: 36289 K (118016 K)]665.768: [Rescan
> (parallel) , 0.0033040 secs]665.771: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 49139K(139444K), 0.0034090 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 665.771: [CMS-concurrent-sweep-start]
> 665.773: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 665.773: [CMS-concurrent-reset-start]
> 665.781: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 667.781: [GC [1 CMS-initial-mark: 12849K(21428K)] 49267K(139444K),
> 0.0057830 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 667.787: [CMS-concurrent-mark-start]
> 667.802: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 667.802: [CMS-concurrent-preclean-start]
> 667.802: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 667.802: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 672.809:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 672.810: [GC[YG occupancy: 36737 K (118016 K)]672.810: [Rescan
> (parallel) , 0.0037010 secs]672.813: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 49587K(139444K), 0.0038010 secs]
> [Times: user=0.03 sys=0.00, real=0.01 secs]
> 672.814: [CMS-concurrent-sweep-start]
> 672.815: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 672.815: [CMS-concurrent-reset-start]
> 672.824: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 674.824: [GC [1 CMS-initial-mark: 12849K(21428K)] 49715K(139444K),
> 0.0058920 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
> 674.830: [CMS-concurrent-mark-start]
> 674.845: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 674.845: [CMS-concurrent-preclean-start]
> 674.845: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 674.845: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 679.849:
> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 679.850: [GC[YG occupancy: 37185 K (118016 K)]679.850: [Rescan
> (parallel) , 0.0033420 secs]679.853: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 50035K(139444K), 0.0034440 secs]
> [Times: user=0.02 sys=0.00, real=0.01 secs]
> 679.853: [CMS-concurrent-sweep-start]
> 679.855: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 679.855: [CMS-concurrent-reset-start]
> 679.863: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 681.864: [GC [1 CMS-initial-mark: 12849K(21428K)] 50163K(139444K),
> 0.0058780 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 681.870: [CMS-concurrent-mark-start]
> 681.884: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 681.884: [CMS-concurrent-preclean-start]
> 681.884: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 681.884: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 686.890:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 686.891: [GC[YG occupancy: 37634 K (118016 K)]686.891: [Rescan
> (parallel) , 0.0044480 secs]686.895: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 50483K(139444K), 0.0045570 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 686.896: [CMS-concurrent-sweep-start]
> 686.897: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 686.897: [CMS-concurrent-reset-start]
> 686.905: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 688.905: [GC [1 CMS-initial-mark: 12849K(21428K)] 50612K(139444K),
> 0.0058940 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 688.911: [CMS-concurrent-mark-start]
> 688.925: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 688.925: [CMS-concurrent-preclean-start]
> 688.925: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 688.926: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 694.041:
> [CMS-concurrent-abortable-preclean: 0.718/5.115 secs] [Times:
> user=0.72 sys=0.00, real=5.11 secs]
> 694.041: [GC[YG occupancy: 38122 K (118016 K)]694.041: [Rescan
> (parallel) , 0.0028640 secs]694.044: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 50972K(139444K), 0.0029660 secs]
> [Times: user=0.03 sys=0.00, real=0.01 secs]
> 694.044: [CMS-concurrent-sweep-start]
> 694.046: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 694.046: [CMS-concurrent-reset-start]
> 694.054: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 696.054: [GC [1 CMS-initial-mark: 12849K(21428K)] 51100K(139444K),
> 0.0060550 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 696.060: [CMS-concurrent-mark-start]
> 696.074: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 696.074: [CMS-concurrent-preclean-start]
> 696.075: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 696.075: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 701.078:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 701.079: [GC[YG occupancy: 38571 K (118016 K)]701.079: [Rescan
> (parallel) , 0.0064210 secs]701.085: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 51421K(139444K), 0.0065220 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 701.085: [CMS-concurrent-sweep-start]
> 701.087: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 701.088: [CMS-concurrent-reset-start]
> 701.097: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 703.097: [GC [1 CMS-initial-mark: 12849K(21428K)] 51549K(139444K),
> 0.0058470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 703.103: [CMS-concurrent-mark-start]
> 703.116: [CMS-concurrent-mark: 0.013/0.013 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 703.116: [CMS-concurrent-preclean-start]
> 703.117: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 703.117: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 708.125:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 708.125: [GC[YG occupancy: 39054 K (118016 K)]708.125: [Rescan
> (parallel) , 0.0037190 secs]708.129: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 51904K(139444K), 0.0038220 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 708.129: [CMS-concurrent-sweep-start]
> 708.131: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 708.131: [CMS-concurrent-reset-start]
> 708.139: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 710.139: [GC [1 CMS-initial-mark: 12849K(21428K)] 52032K(139444K),
> 0.0059770 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 710.145: [CMS-concurrent-mark-start]
> 710.158: [CMS-concurrent-mark: 0.012/0.012 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 710.158: [CMS-concurrent-preclean-start]
> 710.158: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 710.158: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 715.169:
> [CMS-concurrent-abortable-preclean: 0.705/5.011 secs] [Times:
> user=0.69 sys=0.01, real=5.01 secs]
> 715.169: [GC[YG occupancy: 39503 K (118016 K)]715.169: [Rescan
> (parallel) , 0.0042370 secs]715.173: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 52353K(139444K), 0.0043410 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 715.174: [CMS-concurrent-sweep-start]
> 715.176: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 715.176: [CMS-concurrent-reset-start]
> 715.185: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 717.185: [GC [1 CMS-initial-mark: 12849K(21428K)] 52481K(139444K),
> 0.0060050 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 717.191: [CMS-concurrent-mark-start]
> 717.205: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 717.205: [CMS-concurrent-preclean-start]
> 717.206: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 717.206: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 722.209:
> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
> user=0.71 sys=0.00, real=5.00 secs]
> 722.210: [GC[YG occupancy: 40161 K (118016 K)]722.210: [Rescan
> (parallel) , 0.0041630 secs]722.214: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 53011K(139444K), 0.0042630 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 722.214: [CMS-concurrent-sweep-start]
> 722.216: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 722.216: [CMS-concurrent-reset-start]
> 722.226: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 722.521: [GC [1 CMS-initial-mark: 12849K(21428K)] 53099K(139444K),
> 0.0062380 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 722.528: [CMS-concurrent-mark-start]
> 722.544: [CMS-concurrent-mark: 0.015/0.016 secs] [Times: user=0.05
> sys=0.01, real=0.02 secs]
> 722.544: [CMS-concurrent-preclean-start]
> 722.544: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 722.544: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 727.558:
> [CMS-concurrent-abortable-preclean: 0.709/5.014 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 727.558: [GC[YG occupancy: 40610 K (118016 K)]727.558: [Rescan
> (parallel) , 0.0041700 secs]727.563: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 53460K(139444K), 0.0042780 secs]
> [Times: user=0.05 sys=0.00, real=0.00 secs]
> 727.563: [CMS-concurrent-sweep-start]
> 727.564: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 727.564: [CMS-concurrent-reset-start]
> 727.573: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.02 secs]
> 729.574: [GC [1 CMS-initial-mark: 12849K(21428K)] 53588K(139444K),
> 0.0062700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 729.580: [CMS-concurrent-mark-start]
> 729.595: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.02 secs]
> 729.595: [CMS-concurrent-preclean-start]
> 729.595: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 729.595: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 734.597:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 734.597: [GC[YG occupancy: 41058 K (118016 K)]734.597: [Rescan
> (parallel) , 0.0053870 secs]734.603: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 53908K(139444K), 0.0054870 secs]
> [Times: user=0.06 sys=0.00, real=0.00 secs]
> 734.603: [CMS-concurrent-sweep-start]
> 734.604: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 734.604: [CMS-concurrent-reset-start]
> 734.614: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 734.877: [GC [1 CMS-initial-mark: 12849K(21428K)] 53908K(139444K),
> 0.0067230 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 734.884: [CMS-concurrent-mark-start]
> 734.899: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 734.899: [CMS-concurrent-preclean-start]
> 734.899: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 734.899: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 739.905:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 739.906: [GC[YG occupancy: 41379 K (118016 K)]739.906: [Rescan
> (parallel) , 0.0050680 secs]739.911: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 54228K(139444K), 0.0051690 secs]
> [Times: user=0.05 sys=0.00, real=0.00 secs]
> 739.911: [CMS-concurrent-sweep-start]
> 739.912: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 739.912: [CMS-concurrent-reset-start]
> 739.921: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 741.922: [GC [1 CMS-initial-mark: 12849K(21428K)] 54356K(139444K),
> 0.0062880 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 741.928: [CMS-concurrent-mark-start]
> 741.942: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 741.942: [CMS-concurrent-preclean-start]
> 741.943: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 741.943: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 747.059:
> [CMS-concurrent-abortable-preclean: 0.711/5.117 secs] [Times:
> user=0.71 sys=0.00, real=5.12 secs]
> 747.060: [GC[YG occupancy: 41827 K (118016 K)]747.060: [Rescan
> (parallel) , 0.0051040 secs]747.065: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 54677K(139444K), 0.0052090 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 747.065: [CMS-concurrent-sweep-start]
> 747.067: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 747.067: [CMS-concurrent-reset-start]
> 747.075: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 749.075: [GC [1 CMS-initial-mark: 12849K(21428K)] 54805K(139444K),
> 0.0063470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 749.082: [CMS-concurrent-mark-start]
> 749.095: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 749.095: [CMS-concurrent-preclean-start]
> 749.096: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 749.096: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 754.175:
> [CMS-concurrent-abortable-preclean: 0.718/5.079 secs] [Times:
> user=0.72 sys=0.00, real=5.08 secs]
> 754.175: [GC[YG occupancy: 42423 K (118016 K)]754.175: [Rescan
> (parallel) , 0.0051290 secs]754.180: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 55273K(139444K), 0.0052290 secs]
> [Times: user=0.05 sys=0.00, real=0.00 secs]
> 754.181: [CMS-concurrent-sweep-start]
> 754.182: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 754.182: [CMS-concurrent-reset-start]
> 754.191: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 756.191: [GC [1 CMS-initial-mark: 12849K(21428K)] 55401K(139444K),
> 0.0064020 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 756.198: [CMS-concurrent-mark-start]
> 756.212: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 756.212: [CMS-concurrent-preclean-start]
> 756.213: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 756.213: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 761.217:
> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 761.218: [GC[YG occupancy: 42871 K (118016 K)]761.218: [Rescan
> (parallel) , 0.0052310 secs]761.223: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 55721K(139444K), 0.0053300 secs]
> [Times: user=0.06 sys=0.00, real=0.00 secs]
> 761.223: [CMS-concurrent-sweep-start]
> 761.225: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 761.225: [CMS-concurrent-reset-start]
> 761.234: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 763.234: [GC [1 CMS-initial-mark: 12849K(21428K)] 55849K(139444K),
> 0.0045400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 763.239: [CMS-concurrent-mark-start]
> 763.253: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 763.253: [CMS-concurrent-preclean-start]
> 763.253: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 763.253: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 768.348:
> [CMS-concurrent-abortable-preclean: 0.690/5.095 secs] [Times:
> user=0.69 sys=0.00, real=5.10 secs]
> 768.349: [GC[YG occupancy: 43320 K (118016 K)]768.349: [Rescan
> (parallel) , 0.0045140 secs]768.353: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 56169K(139444K), 0.0046170 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 768.353: [CMS-concurrent-sweep-start]
> 768.356: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 768.356: [CMS-concurrent-reset-start]
> 768.365: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 770.365: [GC [1 CMS-initial-mark: 12849K(21428K)] 56298K(139444K),
> 0.0063950 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 770.372: [CMS-concurrent-mark-start]
> 770.388: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 770.388: [CMS-concurrent-preclean-start]
> 770.388: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 770.388: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 775.400:
> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 775.401: [GC[YG occupancy: 43768 K (118016 K)]775.401: [Rescan
> (parallel) , 0.0043990 secs]775.405: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 56618K(139444K), 0.0045000 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 775.405: [CMS-concurrent-sweep-start]
> 775.407: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 775.407: [CMS-concurrent-reset-start]
> 775.417: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 777.417: [GC [1 CMS-initial-mark: 12849K(21428K)] 56746K(139444K),
> 0.0064580 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 777.423: [CMS-concurrent-mark-start]
> 777.438: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 777.438: [CMS-concurrent-preclean-start]
> 777.439: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 777.439: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 782.448:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 782.448: [GC[YG occupancy: 44321 K (118016 K)]782.448: [Rescan
> (parallel) , 0.0054760 secs]782.454: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 57171K(139444K), 0.0055780 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 782.454: [CMS-concurrent-sweep-start]
> 782.455: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 782.455: [CMS-concurrent-reset-start]
> 782.464: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 782.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 57235K(139444K),
> 0.0066970 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 782.550: [CMS-concurrent-mark-start]
> 782.567: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 782.567: [CMS-concurrent-preclean-start]
> 782.568: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 782.568: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 787.574:
> [CMS-concurrent-abortable-preclean: 0.700/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 787.574: [GC[YG occupancy: 44746 K (118016 K)]787.574: [Rescan
> (parallel) , 0.0049170 secs]787.579: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 57596K(139444K), 0.0050210 secs]
> [Times: user=0.06 sys=0.00, real=0.00 secs]
> 787.579: [CMS-concurrent-sweep-start]
> 787.581: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 787.581: [CMS-concurrent-reset-start]
> 787.590: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 789.591: [GC [1 CMS-initial-mark: 12849K(21428K)] 57724K(139444K),
> 0.0066850 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 789.598: [CMS-concurrent-mark-start]
> 789.614: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 789.614: [CMS-concurrent-preclean-start]
> 789.615: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 789.615: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 794.626:
> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 794.627: [GC[YG occupancy: 45195 K (118016 K)]794.627: [Rescan
> (parallel) , 0.0056520 secs]794.632: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 58044K(139444K), 0.0057510 secs]
> [Times: user=0.06 sys=0.00, real=0.00 secs]
> 794.632: [CMS-concurrent-sweep-start]
> 794.634: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 794.634: [CMS-concurrent-reset-start]
> 794.643: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 796.643: [GC [1 CMS-initial-mark: 12849K(21428K)] 58172K(139444K),
> 0.0067410 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 796.650: [CMS-concurrent-mark-start]
> 796.666: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 796.666: [CMS-concurrent-preclean-start]
> 796.667: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 796.667: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 801.670:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 801.670: [GC[YG occupancy: 45643 K (118016 K)]801.670: [Rescan
> (parallel) , 0.0043550 secs]801.675: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 58493K(139444K), 0.0044580 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 801.675: [CMS-concurrent-sweep-start]
> 801.677: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 801.677: [CMS-concurrent-reset-start]
> 801.686: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 803.686: [GC [1 CMS-initial-mark: 12849K(21428K)] 58621K(139444K),
> 0.0067250 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 803.693: [CMS-concurrent-mark-start]
> 803.708: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 803.708: [CMS-concurrent-preclean-start]
> 803.709: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 803.709: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 808.717:
> [CMS-concurrent-abortable-preclean: 0.702/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 808.717: [GC[YG occupancy: 46091 K (118016 K)]808.717: [Rescan
> (parallel) , 0.0034790 secs]808.720: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 58941K(139444K), 0.0035820 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 808.721: [CMS-concurrent-sweep-start]
> 808.722: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 808.722: [CMS-concurrent-reset-start]
> 808.730: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 810.731: [GC [1 CMS-initial-mark: 12849K(21428K)] 59069K(139444K),
> 0.0067580 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 810.738: [CMS-concurrent-mark-start]
> 810.755: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 810.755: [CMS-concurrent-preclean-start]
> 810.755: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 810.755: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 815.823:
> [CMS-concurrent-abortable-preclean: 0.715/5.068 secs] [Times:
> user=0.72 sys=0.00, real=5.06 secs]
> 815.824: [GC[YG occupancy: 46580 K (118016 K)]815.824: [Rescan
> (parallel) , 0.0048490 secs]815.829: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 59430K(139444K), 0.0049600 secs]
> [Times: user=0.06 sys=0.00, real=0.00 secs]
> 815.829: [CMS-concurrent-sweep-start]
> 815.831: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 815.831: [CMS-concurrent-reset-start]
> 815.840: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 817.840: [GC [1 CMS-initial-mark: 12849K(21428K)] 59558K(139444K),
> 0.0068880 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 817.847: [CMS-concurrent-mark-start]
> 817.864: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 817.864: [CMS-concurrent-preclean-start]
> 817.865: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 817.865: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 822.868:
> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
> user=0.69 sys=0.01, real=5.00 secs]
> 822.868: [GC[YG occupancy: 47028 K (118016 K)]822.868: [Rescan
> (parallel) , 0.0061120 secs]822.874: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 59878K(139444K), 0.0062150 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 822.874: [CMS-concurrent-sweep-start]
> 822.876: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 822.876: [CMS-concurrent-reset-start]
> 822.885: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 824.885: [GC [1 CMS-initial-mark: 12849K(21428K)] 60006K(139444K),
> 0.0068610 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 824.892: [CMS-concurrent-mark-start]
> 824.908: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 824.908: [CMS-concurrent-preclean-start]
> 824.908: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 824.908: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 829.914:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 829.915: [GC[YG occupancy: 47477 K (118016 K)]829.915: [Rescan
> (parallel) , 0.0034890 secs]829.918: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 60327K(139444K), 0.0035930 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 829.918: [CMS-concurrent-sweep-start]
> 829.920: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 829.920: [CMS-concurrent-reset-start]
> 829.930: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 831.930: [GC [1 CMS-initial-mark: 12849K(21428K)] 60455K(139444K),
> 0.0069040 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 831.937: [CMS-concurrent-mark-start]
> 831.953: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 831.953: [CMS-concurrent-preclean-start]
> 831.954: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 831.954: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 836.957:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.71 sys=0.00, real=5.00 secs]
> 836.957: [GC[YG occupancy: 47925 K (118016 K)]836.957: [Rescan
> (parallel) , 0.0060440 secs]836.963: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 60775K(139444K), 0.0061520 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 836.964: [CMS-concurrent-sweep-start]
> 836.965: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 836.965: [CMS-concurrent-reset-start]
> 836.974: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 838.974: [GC [1 CMS-initial-mark: 12849K(21428K)] 60903K(139444K),
> 0.0069860 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 838.982: [CMS-concurrent-mark-start]
> 838.997: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 838.998: [CMS-concurrent-preclean-start]
> 838.998: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 838.998: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 844.091:
> [CMS-concurrent-abortable-preclean: 0.718/5.093 secs] [Times:
> user=0.72 sys=0.00, real=5.09 secs]
> 844.092: [GC[YG occupancy: 48731 K (118016 K)]844.092: [Rescan
> (parallel) , 0.0052610 secs]844.097: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 61581K(139444K), 0.0053620 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 844.097: [CMS-concurrent-sweep-start]
> 844.099: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 844.099: [CMS-concurrent-reset-start]
> 844.108: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 846.109: [GC [1 CMS-initial-mark: 12849K(21428K)] 61709K(139444K),
> 0.0071980 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 846.116: [CMS-concurrent-mark-start]
> 846.133: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 846.133: [CMS-concurrent-preclean-start]
> 846.134: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 846.134: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 851.137:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 851.137: [GC[YG occupancy: 49180 K (118016 K)]851.137: [Rescan
> (parallel) , 0.0061320 secs]851.143: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 62030K(139444K), 0.0062320 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 851.144: [CMS-concurrent-sweep-start]
> 851.145: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 851.145: [CMS-concurrent-reset-start]
> 851.154: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 853.154: [GC [1 CMS-initial-mark: 12849K(21428K)] 62158K(139444K),
> 0.0071610 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 853.162: [CMS-concurrent-mark-start]
> 853.177: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 853.177: [CMS-concurrent-preclean-start]
> 853.178: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 853.178: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 858.181:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 858.181: [GC[YG occupancy: 49628 K (118016 K)]858.181: [Rescan
> (parallel) , 0.0029560 secs]858.184: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 62478K(139444K), 0.0030590 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 858.184: [CMS-concurrent-sweep-start]
> 858.186: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 858.186: [CMS-concurrent-reset-start]
> 858.195: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 860.196: [GC [1 CMS-initial-mark: 12849K(21428K)] 62606K(139444K),
> 0.0072070 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 860.203: [CMS-concurrent-mark-start]
> 860.219: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 860.219: [CMS-concurrent-preclean-start]
> 860.219: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 860.219: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 865.226:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 865.227: [GC[YG occupancy: 50076 K (118016 K)]865.227: [Rescan
> (parallel) , 0.0066610 secs]865.233: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 62926K(139444K), 0.0067670 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 865.233: [CMS-concurrent-sweep-start]
> 865.235: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 865.235: [CMS-concurrent-reset-start]
> 865.244: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 867.244: [GC [1 CMS-initial-mark: 12849K(21428K)] 63054K(139444K),
> 0.0072490 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 867.252: [CMS-concurrent-mark-start]
> 867.267: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 867.267: [CMS-concurrent-preclean-start]
> 867.268: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 867.268: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 872.281:
> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 872.281: [GC[YG occupancy: 50525 K (118016 K)]872.281: [Rescan
> (parallel) , 0.0053780 secs]872.286: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 63375K(139444K), 0.0054790 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 872.287: [CMS-concurrent-sweep-start]
> 872.288: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 872.288: [CMS-concurrent-reset-start]
> 872.296: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 872.572: [GC [1 CMS-initial-mark: 12849K(21428K)] 63439K(139444K),
> 0.0073060 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 872.580: [CMS-concurrent-mark-start]
> 872.597: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 872.597: [CMS-concurrent-preclean-start]
> 872.597: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 872.597: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 877.600:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 877.601: [GC[YG occupancy: 51049 K (118016 K)]877.601: [Rescan
> (parallel) , 0.0063070 secs]877.607: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 63899K(139444K), 0.0064090 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 877.607: [CMS-concurrent-sweep-start]
> 877.609: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 877.609: [CMS-concurrent-reset-start]
> 877.619: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 879.619: [GC [1 CMS-initial-mark: 12849K(21428K)] 64027K(139444K),
> 0.0073320 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 879.626: [CMS-concurrent-mark-start]
> 879.643: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 879.643: [CMS-concurrent-preclean-start]
> 879.644: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 879.644: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 884.657:
> [CMS-concurrent-abortable-preclean: 0.708/5.014 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 884.658: [GC[YG occupancy: 51497 K (118016 K)]884.658: [Rescan
> (parallel) , 0.0056160 secs]884.663: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 64347K(139444K), 0.0057150 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 884.663: [CMS-concurrent-sweep-start]
> 884.665: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 884.665: [CMS-concurrent-reset-start]
> 884.674: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 886.674: [GC [1 CMS-initial-mark: 12849K(21428K)] 64475K(139444K),
> 0.0073420 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 886.682: [CMS-concurrent-mark-start]
> 886.698: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 886.698: [CMS-concurrent-preclean-start]
> 886.698: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 886.698: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 891.702:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 891.702: [GC[YG occupancy: 51945 K (118016 K)]891.702: [Rescan
> (parallel) , 0.0070120 secs]891.709: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 64795K(139444K), 0.0071150 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 891.709: [CMS-concurrent-sweep-start]
> 891.711: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 891.711: [CMS-concurrent-reset-start]
> 891.721: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 893.721: [GC [1 CMS-initial-mark: 12849K(21428K)] 64923K(139444K),
> 0.0073880 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 893.728: [CMS-concurrent-mark-start]
> 893.745: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 893.745: [CMS-concurrent-preclean-start]
> 893.745: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 893.745: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 898.852:
> [CMS-concurrent-abortable-preclean: 0.715/5.107 secs] [Times:
> user=0.71 sys=0.00, real=5.10 secs]
> 898.853: [GC[YG occupancy: 53466 K (118016 K)]898.853: [Rescan
> (parallel) , 0.0060600 secs]898.859: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 66315K(139444K), 0.0061640 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 898.859: [CMS-concurrent-sweep-start]
> 898.861: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 898.861: [CMS-concurrent-reset-start]
> 898.870: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 900.871: [GC [1 CMS-initial-mark: 12849K(21428K)] 66444K(139444K),
> 0.0074670 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 900.878: [CMS-concurrent-mark-start]
> 900.895: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 900.895: [CMS-concurrent-preclean-start]
> 900.896: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 900.896: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 905.969:
> [CMS-concurrent-abortable-preclean: 0.716/5.074 secs] [Times:
> user=0.72 sys=0.01, real=5.07 secs]
> 905.969: [GC[YG occupancy: 54157 K (118016 K)]905.970: [Rescan
> (parallel) , 0.0068200 secs]905.976: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 67007K(139444K), 0.0069250 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 905.977: [CMS-concurrent-sweep-start]
> 905.978: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 905.978: [CMS-concurrent-reset-start]
> 905.986: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 907.986: [GC [1 CMS-initial-mark: 12849K(21428K)] 67135K(139444K),
> 0.0076010 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 907.994: [CMS-concurrent-mark-start]
> 908.009: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 908.009: [CMS-concurrent-preclean-start]
> 908.010: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 908.010: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 913.013:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.01, real=5.00 secs]
> 913.013: [GC[YG occupancy: 54606 K (118016 K)]913.013: [Rescan
> (parallel) , 0.0053650 secs]913.018: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 67455K(139444K), 0.0054650 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 913.019: [CMS-concurrent-sweep-start]
> 913.021: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 913.021: [CMS-concurrent-reset-start]
> 913.030: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 915.030: [GC [1 CMS-initial-mark: 12849K(21428K)] 67583K(139444K),
> 0.0076410 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 915.038: [CMS-concurrent-mark-start]
> 915.055: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 915.055: [CMS-concurrent-preclean-start]
> 915.056: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 915.056: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 920.058:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 920.058: [GC[YG occupancy: 55054 K (118016 K)]920.058: [Rescan
> (parallel) , 0.0058380 secs]920.064: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 67904K(139444K), 0.0059420 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 920.064: [CMS-concurrent-sweep-start]
> 920.066: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 920.066: [CMS-concurrent-reset-start]
> 920.075: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.01, real=0.01 secs]
> 922.075: [GC [1 CMS-initial-mark: 12849K(21428K)] 68032K(139444K),
> 0.0075820 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 922.083: [CMS-concurrent-mark-start]
> 922.098: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 922.098: [CMS-concurrent-preclean-start]
> 922.099: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 922.099: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 927.102:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 927.102: [GC[YG occupancy: 55502 K (118016 K)]927.102: [Rescan
> (parallel) , 0.0059190 secs]927.108: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 68352K(139444K), 0.0060220 secs]
> [Times: user=0.06 sys=0.01, real=0.01 secs]
> 927.108: [CMS-concurrent-sweep-start]
> 927.110: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 927.110: [CMS-concurrent-reset-start]
> 927.120: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 929.120: [GC [1 CMS-initial-mark: 12849K(21428K)] 68480K(139444K),
> 0.0077620 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 929.128: [CMS-concurrent-mark-start]
> 929.145: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 929.145: [CMS-concurrent-preclean-start]
> 929.145: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 929.145: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 934.237:
> [CMS-concurrent-abortable-preclean: 0.717/5.092 secs] [Times:
> user=0.72 sys=0.00, real=5.09 secs]
> 934.238: [GC[YG occupancy: 55991 K (118016 K)]934.238: [Rescan
> (parallel) , 0.0042660 secs]934.242: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 68841K(139444K), 0.0043660 secs]
> [Times: user=0.05 sys=0.00, real=0.00 secs]
> 934.242: [CMS-concurrent-sweep-start]
> 934.244: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 934.244: [CMS-concurrent-reset-start]
> 934.252: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 936.253: [GC [1 CMS-initial-mark: 12849K(21428K)] 68969K(139444K),
> 0.0077340 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 936.261: [CMS-concurrent-mark-start]
> 936.277: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 936.277: [CMS-concurrent-preclean-start]
> 936.278: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 936.278: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 941.284:
> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 941.284: [GC[YG occupancy: 56439 K (118016 K)]941.284: [Rescan
> (parallel) , 0.0059460 secs]941.290: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 69289K(139444K), 0.0060470 secs]
> [Times: user=0.08 sys=0.00, real=0.00 secs]
> 941.290: [CMS-concurrent-sweep-start]
> 941.293: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 941.293: [CMS-concurrent-reset-start]
> 941.302: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 943.302: [GC [1 CMS-initial-mark: 12849K(21428K)] 69417K(139444K),
> 0.0077760 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 943.310: [CMS-concurrent-mark-start]
> 943.326: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 943.326: [CMS-concurrent-preclean-start]
> 943.327: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 943.327: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 948.340:
> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 948.340: [GC[YG occupancy: 56888 K (118016 K)]948.340: [Rescan
> (parallel) , 0.0047760 secs]948.345: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 69738K(139444K), 0.0048770 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 948.345: [CMS-concurrent-sweep-start]
> 948.347: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 948.347: [CMS-concurrent-reset-start]
> 948.356: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 950.356: [GC [1 CMS-initial-mark: 12849K(21428K)] 69866K(139444K),
> 0.0077750 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 950.364: [CMS-concurrent-mark-start]
> 950.380: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 950.380: [CMS-concurrent-preclean-start]
> 950.380: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 950.380: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 955.384:
> [CMS-concurrent-abortable-preclean: 0.698/5.004 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 955.384: [GC[YG occupancy: 57336 K (118016 K)]955.384: [Rescan
> (parallel) , 0.0072540 secs]955.392: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 70186K(139444K), 0.0073540 secs]
> [Times: user=0.08 sys=0.00, real=0.00 secs]
> 955.392: [CMS-concurrent-sweep-start]
> 955.394: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 955.394: [CMS-concurrent-reset-start]
> 955.403: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 957.403: [GC [1 CMS-initial-mark: 12849K(21428K)] 70314K(139444K),
> 0.0078120 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 957.411: [CMS-concurrent-mark-start]
> 957.427: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 957.427: [CMS-concurrent-preclean-start]
> 957.427: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 957.427: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 962.437:
> [CMS-concurrent-abortable-preclean: 0.704/5.010 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 962.437: [GC[YG occupancy: 57889 K (118016 K)]962.437: [Rescan
> (parallel) , 0.0076140 secs]962.445: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 70739K(139444K), 0.0077160 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 962.445: [CMS-concurrent-sweep-start]
> 962.446: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 962.446: [CMS-concurrent-reset-start]
> 962.456: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 962.599: [GC [1 CMS-initial-mark: 12849K(21428K)] 70827K(139444K),
> 0.0081180 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 962.608: [CMS-concurrent-mark-start]
> 962.626: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 962.626: [CMS-concurrent-preclean-start]
> 962.626: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 962.626: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 967.632:
> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 967.632: [GC[YG occupancy: 58338 K (118016 K)]967.632: [Rescan
> (parallel) , 0.0061170 secs]967.638: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 71188K(139444K), 0.0062190 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 967.638: [CMS-concurrent-sweep-start]
> 967.640: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 967.640: [CMS-concurrent-reset-start]
> 967.648: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 969.648: [GC [1 CMS-initial-mark: 12849K(21428K)] 71316K(139444K),
> 0.0081110 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 969.656: [CMS-concurrent-mark-start]
> 969.674: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 969.674: [CMS-concurrent-preclean-start]
> 969.674: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 969.674: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 974.677:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 974.677: [GC[YG occupancy: 58786 K (118016 K)]974.677: [Rescan
> (parallel) , 0.0070810 secs]974.685: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 71636K(139444K), 0.0072050 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 974.685: [CMS-concurrent-sweep-start]
> 974.686: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 974.686: [CMS-concurrent-reset-start]
> 974.695: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 976.696: [GC [1 CMS-initial-mark: 12849K(21428K)] 71764K(139444K),
> 0.0080650 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 976.704: [CMS-concurrent-mark-start]
> 976.719: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 976.719: [CMS-concurrent-preclean-start]
> 976.719: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 976.719: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 981.727:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.69 sys=0.01, real=5.01 secs]
> 981.727: [GC[YG occupancy: 59235 K (118016 K)]981.727: [Rescan
> (parallel) , 0.0066570 secs]981.734: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 72085K(139444K), 0.0067620 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 981.734: [CMS-concurrent-sweep-start]
> 981.736: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 981.736: [CMS-concurrent-reset-start]
> 981.745: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 983.745: [GC [1 CMS-initial-mark: 12849K(21428K)] 72213K(139444K),
> 0.0081400 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 983.753: [CMS-concurrent-mark-start]
> 983.769: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 983.769: [CMS-concurrent-preclean-start]
> 983.769: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 983.769: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 988.840:
> [CMS-concurrent-abortable-preclean: 0.716/5.071 secs] [Times:
> user=0.71 sys=0.00, real=5.07 secs]
> 988.840: [GC[YG occupancy: 59683 K (118016 K)]988.840: [Rescan
> (parallel) , 0.0076020 secs]988.848: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 72533K(139444K), 0.0077100 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 988.848: [CMS-concurrent-sweep-start]
> 988.850: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 988.850: [CMS-concurrent-reset-start]
> 988.858: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 990.858: [GC [1 CMS-initial-mark: 12849K(21428K)] 72661K(139444K),
> 0.0081810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 990.867: [CMS-concurrent-mark-start]
> 990.884: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 990.884: [CMS-concurrent-preclean-start]
> 990.885: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 990.885: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 995.999:
> [CMS-concurrent-abortable-preclean: 0.721/5.114 secs] [Times:
> user=0.73 sys=0.00, real=5.11 secs]
> 995.999: [GC[YG occupancy: 60307 K (118016 K)]995.999: [Rescan
> (parallel) , 0.0058190 secs]996.005: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 73156K(139444K), 0.0059260 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 996.005: [CMS-concurrent-sweep-start]
> 996.007: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 996.007: [CMS-concurrent-reset-start]
> 996.016: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 998.016: [GC [1 CMS-initial-mark: 12849K(21428K)] 73285K(139444K),
> 0.0052760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 998.022: [CMS-concurrent-mark-start]
> 998.038: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 998.038: [CMS-concurrent-preclean-start]
> 998.039: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 998.039: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1003.048:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1003.048: [GC[YG occupancy: 60755 K (118016 K)]1003.048: [Rescan
> (parallel) , 0.0068040 secs]1003.055: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 73605K(139444K), 0.0069060 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 1003.055: [CMS-concurrent-sweep-start]
> 1003.057: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1003.057: [CMS-concurrent-reset-start]
> 1003.066: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1005.066: [GC [1 CMS-initial-mark: 12849K(21428K)] 73733K(139444K),
> 0.0082200 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1005.075: [CMS-concurrent-mark-start]
> 1005.090: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1005.090: [CMS-concurrent-preclean-start]
> 1005.090: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1005.090: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1010.094:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1010.094: [GC[YG occupancy: 61203 K (118016 K)]1010.094: [Rescan
> (parallel) , 0.0066010 secs]1010.101: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 74053K(139444K), 0.0067120 secs]
> [Times: user=0.08 sys=0.00, real=0.00 secs]
> 1010.101: [CMS-concurrent-sweep-start]
> 1010.103: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1010.103: [CMS-concurrent-reset-start]
> 1010.112: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1012.113: [GC [1 CMS-initial-mark: 12849K(21428K)] 74181K(139444K),
> 0.0083460 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1012.121: [CMS-concurrent-mark-start]
> 1012.137: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1012.137: [CMS-concurrent-preclean-start]
> 1012.138: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1012.138: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1017.144:
> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1017.144: [GC[YG occupancy: 61651 K (118016 K)]1017.144: [Rescan
> (parallel) , 0.0058810 secs]1017.150: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 74501K(139444K), 0.0059830 secs]
> [Times: user=0.06 sys=0.00, real=0.00 secs]
> 1017.151: [CMS-concurrent-sweep-start]
> 1017.153: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1017.153: [CMS-concurrent-reset-start]
> 1017.162: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1019.162: [GC [1 CMS-initial-mark: 12849K(21428K)] 74629K(139444K),
> 0.0083310 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1019.171: [CMS-concurrent-mark-start]
> 1019.187: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1019.187: [CMS-concurrent-preclean-start]
> 1019.187: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1019.187: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1024.261:
> [CMS-concurrent-abortable-preclean: 0.717/5.074 secs] [Times:
> user=0.72 sys=0.00, real=5.07 secs]
> 1024.261: [GC[YG occupancy: 62351 K (118016 K)]1024.262: [Rescan
> (parallel) , 0.0069720 secs]1024.269: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 75200K(139444K), 0.0070750 secs]
> [Times: user=0.08 sys=0.01, real=0.01 secs]
> 1024.269: [CMS-concurrent-sweep-start]
> 1024.270: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1024.270: [CMS-concurrent-reset-start]
> 1024.278: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1026.279: [GC [1 CMS-initial-mark: 12849K(21428K)] 75329K(139444K),
> 0.0086360 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1026.288: [CMS-concurrent-mark-start]
> 1026.305: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1026.305: [CMS-concurrent-preclean-start]
> 1026.305: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1026.305: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1031.308:
> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1031.308: [GC[YG occupancy: 62799 K (118016 K)]1031.308: [Rescan
> (parallel) , 0.0069330 secs]1031.315: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 75649K(139444K), 0.0070380 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1031.315: [CMS-concurrent-sweep-start]
> 1031.316: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1031.316: [CMS-concurrent-reset-start]
> 1031.326: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1033.326: [GC [1 CMS-initial-mark: 12849K(21428K)] 75777K(139444K),
> 0.0085850 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1033.335: [CMS-concurrent-mark-start]
> 1033.350: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1033.350: [CMS-concurrent-preclean-start]
> 1033.351: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1033.351: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1038.357:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.69 sys=0.01, real=5.01 secs]
> 1038.358: [GC[YG occupancy: 63247 K (118016 K)]1038.358: [Rescan
> (parallel) , 0.0071860 secs]1038.365: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 76097K(139444K), 0.0072900 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 1038.365: [CMS-concurrent-sweep-start]
> 1038.367: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1038.367: [CMS-concurrent-reset-start]
> 1038.376: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1040.376: [GC [1 CMS-initial-mark: 12849K(21428K)] 76225K(139444K),
> 0.0085910 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1040.385: [CMS-concurrent-mark-start]
> 1040.401: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1040.401: [CMS-concurrent-preclean-start]
> 1040.401: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1040.401: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1045.411:
> [CMS-concurrent-abortable-preclean: 0.705/5.010 secs] [Times:
> user=0.69 sys=0.01, real=5.01 secs]
> 1045.412: [GC[YG occupancy: 63695 K (118016 K)]1045.412: [Rescan
> (parallel) , 0.0082050 secs]1045.420: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 76545K(139444K), 0.0083110 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1045.420: [CMS-concurrent-sweep-start]
> 1045.421: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1045.421: [CMS-concurrent-reset-start]
> 1045.430: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1047.430: [GC [1 CMS-initial-mark: 12849K(21428K)] 76673K(139444K),
> 0.0086110 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1047.439: [CMS-concurrent-mark-start]
> 1047.456: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1047.456: [CMS-concurrent-preclean-start]
> 1047.456: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1047.456: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1052.462:
> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1052.462: [GC[YG occupancy: 64144 K (118016 K)]1052.462: [Rescan
> (parallel) , 0.0087770 secs]1052.471: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 76994K(139444K), 0.0088770 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1052.471: [CMS-concurrent-sweep-start]
> 1052.472: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1052.472: [CMS-concurrent-reset-start]
> 1052.481: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1052.628: [GC [1 CMS-initial-mark: 12849K(21428K)] 77058K(139444K),
> 0.0086170 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1052.637: [CMS-concurrent-mark-start]
> 1052.655: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1052.655: [CMS-concurrent-preclean-start]
> 1052.656: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1052.656: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1057.658:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1057.658: [GC[YG occupancy: 64569 K (118016 K)]1057.658: [Rescan
> (parallel) , 0.0072850 secs]1057.665: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 77418K(139444K), 0.0073880 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1057.666: [CMS-concurrent-sweep-start]
> 1057.668: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1057.668: [CMS-concurrent-reset-start]
> 1057.677: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1059.677: [GC [1 CMS-initial-mark: 12849K(21428K)] 77547K(139444K),
> 0.0086820 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1059.686: [CMS-concurrent-mark-start]
> 1059.703: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1059.703: [CMS-concurrent-preclean-start]
> 1059.703: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1059.703: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1064.712:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1064.712: [GC[YG occupancy: 65017 K (118016 K)]1064.712: [Rescan
> (parallel) , 0.0071630 secs]1064.720: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 77867K(139444K), 0.0072700 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1064.720: [CMS-concurrent-sweep-start]
> 1064.722: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1064.722: [CMS-concurrent-reset-start]
> 1064.731: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1066.731: [GC [1 CMS-initial-mark: 12849K(21428K)] 77995K(139444K),
> 0.0087640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1066.740: [CMS-concurrent-mark-start]
> 1066.757: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1066.757: [CMS-concurrent-preclean-start]
> 1066.757: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1066.757: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1071.821:
> [CMS-concurrent-abortable-preclean: 0.714/5.064 secs] [Times:
> user=0.71 sys=0.00, real=5.06 secs]
> 1071.822: [GC[YG occupancy: 65465 K (118016 K)]1071.822: [Rescan
> (parallel) , 0.0056280 secs]1071.827: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 78315K(139444K), 0.0057430 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 1071.828: [CMS-concurrent-sweep-start]
> 1071.830: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1071.830: [CMS-concurrent-reset-start]
> 1071.839: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1073.839: [GC [1 CMS-initial-mark: 12849K(21428K)] 78443K(139444K),
> 0.0087570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1073.848: [CMS-concurrent-mark-start]
> 1073.865: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1073.865: [CMS-concurrent-preclean-start]
> 1073.865: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1073.865: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1078.868:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1078.868: [GC[YG occupancy: 65914 K (118016 K)]1078.868: [Rescan
> (parallel) , 0.0055280 secs]1078.873: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 78763K(139444K), 0.0056320 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 1078.874: [CMS-concurrent-sweep-start]
> 1078.875: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1078.875: [CMS-concurrent-reset-start]
> 1078.884: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1080.884: [GC [1 CMS-initial-mark: 12849K(21428K)] 78892K(139444K),
> 0.0088520 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1080.893: [CMS-concurrent-mark-start]
> 1080.908: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1080.909: [CMS-concurrent-preclean-start]
> 1080.909: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1080.909: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1086.021:
> [CMS-concurrent-abortable-preclean: 0.714/5.112 secs] [Times:
> user=0.72 sys=0.00, real=5.11 secs]
> 1086.021: [GC[YG occupancy: 66531 K (118016 K)]1086.022: [Rescan
> (parallel) , 0.0075330 secs]1086.029: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 79381K(139444K), 0.0076440 secs]
> [Times: user=0.09 sys=0.01, real=0.01 secs]
> 1086.029: [CMS-concurrent-sweep-start]
> 1086.031: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1086.031: [CMS-concurrent-reset-start]
> 1086.041: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1088.041: [GC [1 CMS-initial-mark: 12849K(21428K)] 79509K(139444K),
> 0.0091350 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1088.050: [CMS-concurrent-mark-start]
> 1088.066: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1088.067: [CMS-concurrent-preclean-start]
> 1088.067: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1088.067: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1093.070:
> [CMS-concurrent-abortable-preclean: 0.698/5.004 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1093.071: [GC[YG occupancy: 66980 K (118016 K)]1093.071: [Rescan
> (parallel) , 0.0051870 secs]1093.076: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 79830K(139444K), 0.0052930 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 1093.076: [CMS-concurrent-sweep-start]
> 1093.078: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1093.078: [CMS-concurrent-reset-start]
> 1093.087: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1095.088: [GC [1 CMS-initial-mark: 12849K(21428K)] 79958K(139444K),
> 0.0091350 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1095.097: [CMS-concurrent-mark-start]
> 1095.114: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1095.114: [CMS-concurrent-preclean-start]
> 1095.115: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1095.115: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1100.121:
> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
> user=0.69 sys=0.01, real=5.00 secs]
> 1100.121: [GC[YG occupancy: 67428 K (118016 K)]1100.122: [Rescan
> (parallel) , 0.0068510 secs]1100.128: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 80278K(139444K), 0.0069510 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1100.129: [CMS-concurrent-sweep-start]
> 1100.130: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1100.130: [CMS-concurrent-reset-start]
> 1100.138: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1102.139: [GC [1 CMS-initial-mark: 12849K(21428K)] 80406K(139444K),
> 0.0090760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1102.148: [CMS-concurrent-mark-start]
> 1102.165: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1102.165: [CMS-concurrent-preclean-start]
> 1102.165: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1102.165: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1107.168:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1107.168: [GC[YG occupancy: 67876 K (118016 K)]1107.168: [Rescan
> (parallel) , 0.0076420 secs]1107.176: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 80726K(139444K), 0.0077500 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1107.176: [CMS-concurrent-sweep-start]
> 1107.178: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1107.178: [CMS-concurrent-reset-start]
> 1107.187: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1109.188: [GC [1 CMS-initial-mark: 12849K(21428K)] 80854K(139444K),
> 0.0091510 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1109.197: [CMS-concurrent-mark-start]
> 1109.214: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1109.214: [CMS-concurrent-preclean-start]
> 1109.214: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1109.214: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1114.290:
> [CMS-concurrent-abortable-preclean: 0.711/5.076 secs] [Times:
> user=0.72 sys=0.00, real=5.07 secs]
> 1114.290: [GC[YG occupancy: 68473 K (118016 K)]1114.290: [Rescan
> (parallel) , 0.0084730 secs]1114.299: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 81322K(139444K), 0.0085810 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1114.299: [CMS-concurrent-sweep-start]
> 1114.301: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1114.301: [CMS-concurrent-reset-start]
> 1114.310: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1115.803: [GC [1 CMS-initial-mark: 12849K(21428K)] 81451K(139444K),
> 0.0106050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1115.814: [CMS-concurrent-mark-start]
> 1115.830: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1115.830: [CMS-concurrent-preclean-start]
> 1115.831: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1115.831: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1120.839:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1120.839: [GC[YG occupancy: 68921 K (118016 K)]1120.839: [Rescan
> (parallel) , 0.0088800 secs]1120.848: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 81771K(139444K), 0.0089910 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1120.848: [CMS-concurrent-sweep-start]
> 1120.850: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1120.850: [CMS-concurrent-reset-start]
> 1120.858: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1122.859: [GC [1 CMS-initial-mark: 12849K(21428K)] 81899K(139444K),
> 0.0092280 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1122.868: [CMS-concurrent-mark-start]
> 1122.885: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1122.885: [CMS-concurrent-preclean-start]
> 1122.885: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1122.885: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1127.888:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.69 sys=0.01, real=5.00 secs]
> 1127.888: [GC[YG occupancy: 69369 K (118016 K)]1127.888: [Rescan
> (parallel) , 0.0087740 secs]1127.897: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 82219K(139444K), 0.0088850 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1127.897: [CMS-concurrent-sweep-start]
> 1127.898: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1127.898: [CMS-concurrent-reset-start]
> 1127.906: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1129.907: [GC [1 CMS-initial-mark: 12849K(21428K)] 82347K(139444K),
> 0.0092280 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1129.916: [CMS-concurrent-mark-start]
> 1129.933: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1129.933: [CMS-concurrent-preclean-start]
> 1129.934: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1129.934: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1134.938:
> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1134.938: [GC[YG occupancy: 69818 K (118016 K)]1134.939: [Rescan
> (parallel) , 0.0078530 secs]1134.946: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 82667K(139444K), 0.0079630 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1134.947: [CMS-concurrent-sweep-start]
> 1134.948: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1134.948: [CMS-concurrent-reset-start]
> 1134.956: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1136.957: [GC [1 CMS-initial-mark: 12849K(21428K)] 82795K(139444K),
> 0.0092760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1136.966: [CMS-concurrent-mark-start]
> 1136.983: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1136.983: [CMS-concurrent-preclean-start]
> 1136.984: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.01 secs]
> 1136.984: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1141.991:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1141.991: [GC[YG occupancy: 70266 K (118016 K)]1141.991: [Rescan
> (parallel) , 0.0090620 secs]1142.000: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 83116K(139444K), 0.0091700 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1142.000: [CMS-concurrent-sweep-start]
> 1142.002: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1142.002: [CMS-concurrent-reset-start]
> 1142.011: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1142.657: [GC [1 CMS-initial-mark: 12849K(21428K)] 83390K(139444K),
> 0.0094330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1142.667: [CMS-concurrent-mark-start]
> 1142.685: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1142.685: [CMS-concurrent-preclean-start]
> 1142.686: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1142.686: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1147.688:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1147.688: [GC[YG occupancy: 70901 K (118016 K)]1147.688: [Rescan
> (parallel) , 0.0081170 secs]1147.696: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 83751K(139444K), 0.0082390 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1147.697: [CMS-concurrent-sweep-start]
> 1147.698: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1147.698: [CMS-concurrent-reset-start]
> 1147.706: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1149.706: [GC [1 CMS-initial-mark: 12849K(21428K)] 83879K(139444K),
> 0.0095560 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1149.716: [CMS-concurrent-mark-start]
> 1149.734: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1149.734: [CMS-concurrent-preclean-start]
> 1149.734: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1149.734: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1154.741:
> [CMS-concurrent-abortable-preclean: 0.701/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1154.741: [GC[YG occupancy: 71349 K (118016 K)]1154.741: [Rescan
> (parallel) , 0.0090720 secs]1154.750: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 84199K(139444K), 0.0091780 secs]
> [Times: user=0.10 sys=0.01, real=0.01 secs]
> 1154.750: [CMS-concurrent-sweep-start]
> 1154.752: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1154.752: [CMS-concurrent-reset-start]
> 1154.762: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1155.021: [GC [1 CMS-initial-mark: 12849K(21428K)] 84199K(139444K),
> 0.0094030 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1155.031: [CMS-concurrent-mark-start]
> 1155.047: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1155.047: [CMS-concurrent-preclean-start]
> 1155.047: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1155.047: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1160.056:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1160.056: [GC[YG occupancy: 71669 K (118016 K)]1160.056: [Rescan
> (parallel) , 0.0056520 secs]1160.062: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 84519K(139444K), 0.0057790 secs]
> [Times: user=0.07 sys=0.00, real=0.00 secs]
> 1160.062: [CMS-concurrent-sweep-start]
> 1160.064: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1160.064: [CMS-concurrent-reset-start]
> 1160.073: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1162.074: [GC [1 CMS-initial-mark: 12849K(21428K)] 84647K(139444K),
> 0.0095040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1162.083: [CMS-concurrent-mark-start]
> 1162.098: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1162.098: [CMS-concurrent-preclean-start]
> 1162.099: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1162.099: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1167.102:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1167.102: [GC[YG occupancy: 72118 K (118016 K)]1167.102: [Rescan
> (parallel) , 0.0072180 secs]1167.110: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 84968K(139444K), 0.0073300 secs]
> [Times: user=0.08 sys=0.00, real=0.01 secs]
> 1167.110: [CMS-concurrent-sweep-start]
> 1167.112: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1167.112: [CMS-concurrent-reset-start]
> 1167.121: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1169.121: [GC [1 CMS-initial-mark: 12849K(21428K)] 85096K(139444K),
> 0.0096940 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1169.131: [CMS-concurrent-mark-start]
> 1169.147: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1169.147: [CMS-concurrent-preclean-start]
> 1169.147: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1169.147: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1174.197:
> [CMS-concurrent-abortable-preclean: 0.720/5.050 secs] [Times:
> user=0.72 sys=0.01, real=5.05 secs]
> 1174.198: [GC[YG occupancy: 72607 K (118016 K)]1174.198: [Rescan
> (parallel) , 0.0064910 secs]1174.204: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 85456K(139444K), 0.0065940 secs]
> [Times: user=0.06 sys=0.01, real=0.01 secs]
> 1174.204: [CMS-concurrent-sweep-start]
> 1174.206: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1174.206: [CMS-concurrent-reset-start]
> 1174.215: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1176.215: [GC [1 CMS-initial-mark: 12849K(21428K)] 85585K(139444K),
> 0.0095940 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1176.225: [CMS-concurrent-mark-start]
> 1176.240: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1176.240: [CMS-concurrent-preclean-start]
> 1176.241: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1176.241: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1181.244:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1181.244: [GC[YG occupancy: 73055 K (118016 K)]1181.244: [Rescan
> (parallel) , 0.0093030 secs]1181.254: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 85905K(139444K), 0.0094040 secs]
> [Times: user=0.09 sys=0.01, real=0.01 secs]
> 1181.254: [CMS-concurrent-sweep-start]
> 1181.256: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1181.256: [CMS-concurrent-reset-start]
> 1181.265: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1183.266: [GC [1 CMS-initial-mark: 12849K(21428K)] 86033K(139444K),
> 0.0096490 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1183.275: [CMS-concurrent-mark-start]
> 1183.293: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
> sys=0.00, real=0.02 secs]
> 1183.293: [CMS-concurrent-preclean-start]
> 1183.294: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1183.294: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1188.301:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1188.301: [GC[YG occupancy: 73503 K (118016 K)]1188.301: [Rescan
> (parallel) , 0.0092610 secs]1188.310: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 86353K(139444K), 0.0093750 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1188.310: [CMS-concurrent-sweep-start]
> 1188.312: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1188.312: [CMS-concurrent-reset-start]
> 1188.320: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1190.321: [GC [1 CMS-initial-mark: 12849K(21428K)] 86481K(139444K),
> 0.0097510 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1190.331: [CMS-concurrent-mark-start]
> 1190.347: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1190.347: [CMS-concurrent-preclean-start]
> 1190.347: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1190.347: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1195.359:
> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1195.359: [GC[YG occupancy: 73952 K (118016 K)]1195.359: [Rescan
> (parallel) , 0.0093210 secs]1195.368: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 86801K(139444K), 0.0094330 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1195.369: [CMS-concurrent-sweep-start]
> 1195.370: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1195.370: [CMS-concurrent-reset-start]
> 1195.378: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1196.543: [GC [1 CMS-initial-mark: 12849K(21428K)] 88001K(139444K),
> 0.0099870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1196.553: [CMS-concurrent-mark-start]
> 1196.570: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1196.570: [CMS-concurrent-preclean-start]
> 1196.570: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1196.570: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1201.574:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1201.574: [GC[YG occupancy: 75472 K (118016 K)]1201.574: [Rescan
> (parallel) , 0.0096480 secs]1201.584: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 88322K(139444K), 0.0097500 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1201.584: [CMS-concurrent-sweep-start]
> 1201.586: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1201.586: [CMS-concurrent-reset-start]
> 1201.595: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1202.679: [GC [1 CMS-initial-mark: 12849K(21428K)] 88491K(139444K),
> 0.0099400 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1202.690: [CMS-concurrent-mark-start]
> 1202.708: [CMS-concurrent-mark: 0.016/0.019 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1202.708: [CMS-concurrent-preclean-start]
> 1202.709: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1202.709: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1207.718:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1207.718: [GC[YG occupancy: 76109 K (118016 K)]1207.718: [Rescan
> (parallel) , 0.0096360 secs]1207.727: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 88959K(139444K), 0.0097380 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1207.728: [CMS-concurrent-sweep-start]
> 1207.729: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1207.729: [CMS-concurrent-reset-start]
> 1207.737: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1209.738: [GC [1 CMS-initial-mark: 12849K(21428K)] 89087K(139444K),
> 0.0099440 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1209.748: [CMS-concurrent-mark-start]
> 1209.765: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1209.765: [CMS-concurrent-preclean-start]
> 1209.765: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1209.765: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1214.797:
> [CMS-concurrent-abortable-preclean: 0.716/5.031 secs] [Times:
> user=0.72 sys=0.00, real=5.03 secs]
> 1214.797: [GC[YG occupancy: 76557 K (118016 K)]1214.797: [Rescan
> (parallel) , 0.0096280 secs]1214.807: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 89407K(139444K), 0.0097320 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1214.807: [CMS-concurrent-sweep-start]
> 1214.808: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1214.808: [CMS-concurrent-reset-start]
> 1214.816: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1216.817: [GC [1 CMS-initial-mark: 12849K(21428K)] 89535K(139444K),
> 0.0099640 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1216.827: [CMS-concurrent-mark-start]
> 1216.844: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1216.844: [CMS-concurrent-preclean-start]
> 1216.844: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1216.844: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1221.847:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1221.847: [GC[YG occupancy: 77005 K (118016 K)]1221.847: [Rescan
> (parallel) , 0.0061810 secs]1221.854: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 89855K(139444K), 0.0062950 secs]
> [Times: user=0.07 sys=0.00, real=0.01 secs]
> 1221.854: [CMS-concurrent-sweep-start]
> 1221.855: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1221.855: [CMS-concurrent-reset-start]
> 1221.864: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1223.865: [GC [1 CMS-initial-mark: 12849K(21428K)] 89983K(139444K),
> 0.0100430 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1223.875: [CMS-concurrent-mark-start]
> 1223.890: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1223.890: [CMS-concurrent-preclean-start]
> 1223.891: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1223.891: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1228.899:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1228.899: [GC[YG occupancy: 77454 K (118016 K)]1228.899: [Rescan
> (parallel) , 0.0095850 secs]1228.909: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 90304K(139444K), 0.0096960 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1228.909: [CMS-concurrent-sweep-start]
> 1228.911: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1228.911: [CMS-concurrent-reset-start]
> 1228.919: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1230.919: [GC [1 CMS-initial-mark: 12849K(21428K)] 90432K(139444K),
> 0.0101360 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1230.930: [CMS-concurrent-mark-start]
> 1230.946: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1230.946: [CMS-concurrent-preclean-start]
> 1230.947: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1230.947: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1235.952:
> [CMS-concurrent-abortable-preclean: 0.699/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1235.952: [GC[YG occupancy: 77943 K (118016 K)]1235.952: [Rescan
> (parallel) , 0.0084420 secs]1235.961: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 90793K(139444K), 0.0085450 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1235.961: [CMS-concurrent-sweep-start]
> 1235.963: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1235.963: [CMS-concurrent-reset-start]
> 1235.972: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1237.973: [GC [1 CMS-initial-mark: 12849K(21428K)] 90921K(139444K),
> 0.0101280 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1237.983: [CMS-concurrent-mark-start]
> 1237.998: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1237.998: [CMS-concurrent-preclean-start]
> 1237.999: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1237.999: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1243.008:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1243.008: [GC[YG occupancy: 78391 K (118016 K)]1243.008: [Rescan
> (parallel) , 0.0090510 secs]1243.017: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 91241K(139444K), 0.0091560 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1243.017: [CMS-concurrent-sweep-start]
> 1243.019: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1243.019: [CMS-concurrent-reset-start]
> 1243.027: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1245.027: [GC [1 CMS-initial-mark: 12849K(21428K)] 91369K(139444K),
> 0.0101080 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1245.038: [CMS-concurrent-mark-start]
> 1245.055: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1245.055: [CMS-concurrent-preclean-start]
> 1245.055: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1245.055: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1250.058:
> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1250.058: [GC[YG occupancy: 78839 K (118016 K)]1250.058: [Rescan
> (parallel) , 0.0096920 secs]1250.068: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 91689K(139444K), 0.0098040 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1250.068: [CMS-concurrent-sweep-start]
> 1250.070: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1250.070: [CMS-concurrent-reset-start]
> 1250.078: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1252.078: [GC [1 CMS-initial-mark: 12849K(21428K)] 91817K(139444K),
> 0.0102560 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1252.089: [CMS-concurrent-mark-start]
> 1252.105: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1252.105: [CMS-concurrent-preclean-start]
> 1252.106: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1252.106: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1257.113:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1257.113: [GC[YG occupancy: 79288 K (118016 K)]1257.113: [Rescan
> (parallel) , 0.0089920 secs]1257.122: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 92137K(139444K), 0.0090960 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1257.122: [CMS-concurrent-sweep-start]
> 1257.124: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1257.124: [CMS-concurrent-reset-start]
> 1257.133: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1259.134: [GC [1 CMS-initial-mark: 12849K(21428K)] 92266K(139444K),
> 0.0101720 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1259.144: [CMS-concurrent-mark-start]
> 1259.159: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 1259.159: [CMS-concurrent-preclean-start]
> 1259.159: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1259.159: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1264.229:
> [CMS-concurrent-abortable-preclean: 0.716/5.070 secs] [Times:
> user=0.72 sys=0.01, real=5.07 secs]
> 1264.229: [GC[YG occupancy: 79881 K (118016 K)]1264.229: [Rescan
> (parallel) , 0.0101320 secs]1264.240: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 92731K(139444K), 0.0102440 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1264.240: [CMS-concurrent-sweep-start]
> 1264.241: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1264.241: [CMS-concurrent-reset-start]
> 1264.250: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1266.250: [GC [1 CMS-initial-mark: 12849K(21428K)] 92859K(139444K),
> 0.0105180 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1266.261: [CMS-concurrent-mark-start]
> 1266.277: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1266.277: [CMS-concurrent-preclean-start]
> 1266.277: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1266.277: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1271.285:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1271.285: [GC[YG occupancy: 80330 K (118016 K)]1271.285: [Rescan
> (parallel) , 0.0094600 secs]1271.295: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 93180K(139444K), 0.0095600 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1271.295: [CMS-concurrent-sweep-start]
> 1271.297: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1271.297: [CMS-concurrent-reset-start]
> 1271.306: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1273.306: [GC [1 CMS-initial-mark: 12849K(21428K)] 93308K(139444K),
> 0.0104100 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1273.317: [CMS-concurrent-mark-start]
> 1273.334: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1273.334: [CMS-concurrent-preclean-start]
> 1273.335: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1273.335: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1278.341:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1278.341: [GC[YG occupancy: 80778 K (118016 K)]1278.341: [Rescan
> (parallel) , 0.0101320 secs]1278.351: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 93628K(139444K), 0.0102460 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1278.351: [CMS-concurrent-sweep-start]
> 1278.353: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1278.353: [CMS-concurrent-reset-start]
> 1278.362: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1280.362: [GC [1 CMS-initial-mark: 12849K(21428K)] 93756K(139444K),
> 0.0105680 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1280.373: [CMS-concurrent-mark-start]
> 1280.388: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1280.388: [CMS-concurrent-preclean-start]
> 1280.388: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1280.388: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1285.400:
> [CMS-concurrent-abortable-preclean: 0.706/5.012 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1285.400: [GC[YG occupancy: 81262 K (118016 K)]1285.400: [Rescan
> (parallel) , 0.0093660 secs]1285.410: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 94111K(139444K), 0.0094820 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1285.410: [CMS-concurrent-sweep-start]
> 1285.411: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1285.411: [CMS-concurrent-reset-start]
> 1285.420: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1287.420: [GC [1 CMS-initial-mark: 12849K(21428K)] 94240K(139444K),
> 0.0105800 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1287.431: [CMS-concurrent-mark-start]
> 1287.447: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1287.447: [CMS-concurrent-preclean-start]
> 1287.447: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1287.447: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1292.460:
> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1292.460: [GC[YG occupancy: 81710 K (118016 K)]1292.460: [Rescan
> (parallel) , 0.0081130 secs]1292.468: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 94560K(139444K), 0.0082210 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1292.468: [CMS-concurrent-sweep-start]
> 1292.470: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1292.470: [CMS-concurrent-reset-start]
> 1292.480: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1292.712: [GC [1 CMS-initial-mark: 12849K(21428K)] 94624K(139444K),
> 0.0104870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1292.723: [CMS-concurrent-mark-start]
> 1292.739: [CMS-concurrent-mark: 0.015/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1292.739: [CMS-concurrent-preclean-start]
> 1292.740: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1292.740: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1297.748:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1297.748: [GC[YG occupancy: 82135 K (118016 K)]1297.748: [Rescan
> (parallel) , 0.0106180 secs]1297.759: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 94985K(139444K), 0.0107410 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1297.759: [CMS-concurrent-sweep-start]
> 1297.760: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1297.761: [CMS-concurrent-reset-start]
> 1297.769: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1299.769: [GC [1 CMS-initial-mark: 12849K(21428K)] 95113K(139444K),
> 0.0105340 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1299.780: [CMS-concurrent-mark-start]
> 1299.796: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1299.796: [CMS-concurrent-preclean-start]
> 1299.797: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1299.797: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1304.805:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.69 sys=0.00, real=5.01 secs]
> 1304.805: [GC[YG occupancy: 82583 K (118016 K)]1304.806: [Rescan
> (parallel) , 0.0094010 secs]1304.815: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 95433K(139444K), 0.0095140 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1304.815: [CMS-concurrent-sweep-start]
> 1304.817: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1304.817: [CMS-concurrent-reset-start]
> 1304.827: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1306.827: [GC [1 CMS-initial-mark: 12849K(21428K)] 95561K(139444K),
> 0.0107300 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1306.838: [CMS-concurrent-mark-start]
> 1306.855: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1306.855: [CMS-concurrent-preclean-start]
> 1306.855: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1306.855: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1311.858:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1311.858: [GC[YG occupancy: 83032 K (118016 K)]1311.858: [Rescan
> (parallel) , 0.0094210 secs]1311.867: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 95882K(139444K), 0.0095360 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1311.868: [CMS-concurrent-sweep-start]
> 1311.869: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1311.869: [CMS-concurrent-reset-start]
> 1311.877: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1313.878: [GC [1 CMS-initial-mark: 12849K(21428K)] 96010K(139444K),
> 0.0107870 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1313.889: [CMS-concurrent-mark-start]
> 1313.905: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1313.905: [CMS-concurrent-preclean-start]
> 1313.906: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1313.906: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1318.914:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1318.915: [GC[YG occupancy: 83481 K (118016 K)]1318.915: [Rescan
> (parallel) , 0.0096280 secs]1318.924: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 96331K(139444K), 0.0097340 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1318.925: [CMS-concurrent-sweep-start]
> 1318.927: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1318.927: [CMS-concurrent-reset-start]
> 1318.936: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1320.936: [GC [1 CMS-initial-mark: 12849K(21428K)] 96459K(139444K),
> 0.0106300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1320.947: [CMS-concurrent-mark-start]
> 1320.964: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1320.964: [CMS-concurrent-preclean-start]
> 1320.965: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1320.965: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1325.991:
> [CMS-concurrent-abortable-preclean: 0.717/5.026 secs] [Times:
> user=0.73 sys=0.00, real=5.02 secs]
> 1325.991: [GC[YG occupancy: 84205 K (118016 K)]1325.991: [Rescan
> (parallel) , 0.0097880 secs]1326.001: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 97055K(139444K), 0.0099010 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1326.001: [CMS-concurrent-sweep-start]
> 1326.003: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1326.003: [CMS-concurrent-reset-start]
> 1326.012: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1328.013: [GC [1 CMS-initial-mark: 12849K(21428K)] 97183K(139444K),
> 0.0109730 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1328.024: [CMS-concurrent-mark-start]
> 1328.039: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1328.039: [CMS-concurrent-preclean-start]
> 1328.039: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1328.039: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1333.043:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1333.043: [GC[YG occupancy: 84654 K (118016 K)]1333.043: [Rescan
> (parallel) , 0.0110740 secs]1333.054: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 97504K(139444K), 0.0111760 secs]
> [Times: user=0.12 sys=0.01, real=0.02 secs]
> 1333.054: [CMS-concurrent-sweep-start]
> 1333.056: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1333.056: [CMS-concurrent-reset-start]
> 1333.065: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1335.066: [GC [1 CMS-initial-mark: 12849K(21428K)] 97632K(139444K),
> 0.0109300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1335.077: [CMS-concurrent-mark-start]
> 1335.094: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1335.094: [CMS-concurrent-preclean-start]
> 1335.094: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1335.094: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1340.103:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1340.103: [GC[YG occupancy: 85203 K (118016 K)]1340.103: [Rescan
> (parallel) , 0.0109470 secs]1340.114: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 98052K(139444K), 0.0110500 secs]
> [Times: user=0.11 sys=0.01, real=0.02 secs]
> 1340.114: [CMS-concurrent-sweep-start]
> 1340.116: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1340.116: [CMS-concurrent-reset-start]
> 1340.125: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1342.126: [GC [1 CMS-initial-mark: 12849K(21428K)] 98181K(139444K),
> 0.0109170 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1342.137: [CMS-concurrent-mark-start]
> 1342.154: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1342.154: [CMS-concurrent-preclean-start]
> 1342.154: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1342.154: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1347.161:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1347.162: [GC[YG occupancy: 85652 K (118016 K)]1347.162: [Rescan
> (parallel) , 0.0075610 secs]1347.169: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 98502K(139444K), 0.0076680 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1347.169: [CMS-concurrent-sweep-start]
> 1347.171: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1347.172: [CMS-concurrent-reset-start]
> 1347.181: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1349.181: [GC [1 CMS-initial-mark: 12849K(21428K)] 98630K(139444K),
> 0.0109540 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1349.192: [CMS-concurrent-mark-start]
> 1349.208: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1349.208: [CMS-concurrent-preclean-start]
> 1349.208: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1349.208: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1354.268:
> [CMS-concurrent-abortable-preclean: 0.723/5.060 secs] [Times:
> user=0.73 sys=0.00, real=5.06 secs]
> 1354.268: [GC[YG occupancy: 86241 K (118016 K)]1354.268: [Rescan
> (parallel) , 0.0099530 secs]1354.278: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 99091K(139444K), 0.0100670 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1354.278: [CMS-concurrent-sweep-start]
> 1354.280: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1354.280: [CMS-concurrent-reset-start]
> 1354.288: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1356.289: [GC [1 CMS-initial-mark: 12849K(21428K)] 99219K(139444K),
> 0.0111450 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1356.300: [CMS-concurrent-mark-start]
> 1356.316: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1356.316: [CMS-concurrent-preclean-start]
> 1356.317: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1356.317: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1361.322:
> [CMS-concurrent-abortable-preclean: 0.700/5.005 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1361.322: [GC[YG occupancy: 86690 K (118016 K)]1361.322: [Rescan
> (parallel) , 0.0097180 secs]1361.332: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 99540K(139444K), 0.0098210 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1361.332: [CMS-concurrent-sweep-start]
> 1361.333: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1361.333: [CMS-concurrent-reset-start]
> 1361.342: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1363.342: [GC [1 CMS-initial-mark: 12849K(21428K)] 99668K(139444K),
> 0.0110230 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1363.354: [CMS-concurrent-mark-start]
> 1363.368: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1363.368: [CMS-concurrent-preclean-start]
> 1363.369: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1363.369: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1368.378:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1368.378: [GC[YG occupancy: 87139 K (118016 K)]1368.378: [Rescan
> (parallel) , 0.0100770 secs]1368.388: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 99989K(139444K), 0.0101900 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1368.388: [CMS-concurrent-sweep-start]
> 1368.390: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1368.390: [CMS-concurrent-reset-start]
> 1368.398: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1370.399: [GC [1 CMS-initial-mark: 12849K(21428K)] 100117K(139444K),
> 0.0111810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1370.410: [CMS-concurrent-mark-start]
> 1370.426: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1370.426: [CMS-concurrent-preclean-start]
> 1370.427: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1370.427: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1375.447:
> [CMS-concurrent-abortable-preclean: 0.715/5.020 secs] [Times:
> user=0.72 sys=0.00, real=5.02 secs]
> 1375.447: [GC[YG occupancy: 87588 K (118016 K)]1375.447: [Rescan
> (parallel) , 0.0101690 secs]1375.457: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 100438K(139444K), 0.0102730 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1375.457: [CMS-concurrent-sweep-start]
> 1375.459: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1375.459: [CMS-concurrent-reset-start]
> 1375.467: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1377.467: [GC [1 CMS-initial-mark: 12849K(21428K)] 100566K(139444K),
> 0.0110760 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1377.478: [CMS-concurrent-mark-start]
> 1377.495: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1377.495: [CMS-concurrent-preclean-start]
> 1377.496: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1377.496: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1382.502:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.69 sys=0.01, real=5.00 secs]
> 1382.502: [GC[YG occupancy: 89213 K (118016 K)]1382.502: [Rescan
> (parallel) , 0.0108630 secs]1382.513: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 102063K(139444K), 0.0109700 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1382.513: [CMS-concurrent-sweep-start]
> 1382.514: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1382.514: [CMS-concurrent-reset-start]
> 1382.523: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1382.743: [GC [1 CMS-initial-mark: 12849K(21428K)] 102127K(139444K),
> 0.0113140 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1382.755: [CMS-concurrent-mark-start]
> 1382.773: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1382.773: [CMS-concurrent-preclean-start]
> 1382.774: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1382.774: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1387.777:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1387.777: [GC[YG occupancy: 89638 K (118016 K)]1387.777: [Rescan
> (parallel) , 0.0113310 secs]1387.789: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 102488K(139444K), 0.0114430 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1387.789: [CMS-concurrent-sweep-start]
> 1387.790: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1387.790: [CMS-concurrent-reset-start]
> 1387.799: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1389.799: [GC [1 CMS-initial-mark: 12849K(21428K)] 102617K(139444K),
> 0.0113540 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1389.810: [CMS-concurrent-mark-start]
> 1389.827: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1389.827: [CMS-concurrent-preclean-start]
> 1389.827: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1389.827: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1394.831:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1394.831: [GC[YG occupancy: 90088 K (118016 K)]1394.831: [Rescan
> (parallel) , 0.0103790 secs]1394.841: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 102938K(139444K), 0.0104960 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1394.842: [CMS-concurrent-sweep-start]
> 1394.844: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1394.844: [CMS-concurrent-reset-start]
> 1394.853: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1396.853: [GC [1 CMS-initial-mark: 12849K(21428K)] 103066K(139444K),
> 0.0114740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1396.865: [CMS-concurrent-mark-start]
> 1396.880: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1396.880: [CMS-concurrent-preclean-start]
> 1396.881: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1396.881: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1401.890:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1401.890: [GC[YG occupancy: 90537 K (118016 K)]1401.891: [Rescan
> (parallel) , 0.0116110 secs]1401.902: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 103387K(139444K), 0.0117240 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1401.902: [CMS-concurrent-sweep-start]
> 1401.904: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1401.904: [CMS-concurrent-reset-start]
> 1401.914: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1403.914: [GC [1 CMS-initial-mark: 12849K(21428K)] 103515K(139444K),
> 0.0111980 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1403.925: [CMS-concurrent-mark-start]
> 1403.943: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1403.943: [CMS-concurrent-preclean-start]
> 1403.944: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.01 secs]
> 1403.944: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1408.982:
> [CMS-concurrent-abortable-preclean: 0.718/5.038 secs] [Times:
> user=0.72 sys=0.00, real=5.03 secs]
> 1408.982: [GC[YG occupancy: 90986 K (118016 K)]1408.982: [Rescan
> (parallel) , 0.0115260 secs]1408.994: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 103836K(139444K), 0.0116320 secs]
> [Times: user=0.13 sys=0.00, real=0.02 secs]
> 1408.994: [CMS-concurrent-sweep-start]
> 1408.996: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1408.996: [CMS-concurrent-reset-start]
> 1409.005: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1411.005: [GC [1 CMS-initial-mark: 12849K(21428K)] 103964K(139444K),
> 0.0114590 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1411.017: [CMS-concurrent-mark-start]
> 1411.034: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1411.034: [CMS-concurrent-preclean-start]
> 1411.034: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1411.034: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1416.140:
> [CMS-concurrent-abortable-preclean: 0.712/5.105 secs] [Times:
> user=0.71 sys=0.00, real=5.10 secs]
> 1416.140: [GC[YG occupancy: 91476 K (118016 K)]1416.140: [Rescan
> (parallel) , 0.0114950 secs]1416.152: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 104326K(139444K), 0.0116020 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1416.152: [CMS-concurrent-sweep-start]
> 1416.154: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1416.154: [CMS-concurrent-reset-start]
> 1416.163: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1418.163: [GC [1 CMS-initial-mark: 12849K(21428K)] 104454K(139444K),
> 0.0114040 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1418.175: [CMS-concurrent-mark-start]
> 1418.191: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1418.191: [CMS-concurrent-preclean-start]
> 1418.191: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1418.191: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1423.198:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1423.199: [GC[YG occupancy: 91925 K (118016 K)]1423.199: [Rescan
> (parallel) , 0.0105460 secs]1423.209: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 104775K(139444K), 0.0106640 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1423.209: [CMS-concurrent-sweep-start]
> 1423.211: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1423.211: [CMS-concurrent-reset-start]
> 1423.220: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1425.220: [GC [1 CMS-initial-mark: 12849K(21428K)] 104903K(139444K),
> 0.0116300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1425.232: [CMS-concurrent-mark-start]
> 1425.248: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1425.248: [CMS-concurrent-preclean-start]
> 1425.248: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1425.248: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1430.252:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1430.252: [GC[YG occupancy: 92374 K (118016 K)]1430.252: [Rescan
> (parallel) , 0.0098720 secs]1430.262: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 105224K(139444K), 0.0099750 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1430.262: [CMS-concurrent-sweep-start]
> 1430.264: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1430.264: [CMS-concurrent-reset-start]
> 1430.273: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1432.274: [GC [1 CMS-initial-mark: 12849K(21428K)] 105352K(139444K),
> 0.0114050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1432.285: [CMS-concurrent-mark-start]
> 1432.301: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1432.301: [CMS-concurrent-preclean-start]
> 1432.301: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1432.301: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1437.304:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1437.305: [GC[YG occupancy: 92823 K (118016 K)]1437.305: [Rescan
> (parallel) , 0.0115010 secs]1437.316: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 105673K(139444K), 0.0116090 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1437.316: [CMS-concurrent-sweep-start]
> 1437.319: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1437.319: [CMS-concurrent-reset-start]
> 1437.328: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1439.328: [GC [1 CMS-initial-mark: 12849K(21428K)] 105801K(139444K),
> 0.0115740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1439.340: [CMS-concurrent-mark-start]
> 1439.356: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1439.356: [CMS-concurrent-preclean-start]
> 1439.356: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1439.356: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1444.411:
> [CMS-concurrent-abortable-preclean: 0.715/5.054 secs] [Times:
> user=0.72 sys=0.00, real=5.05 secs]
> 1444.411: [GC[YG occupancy: 93547 K (118016 K)]1444.411: [Rescan
> (parallel) , 0.0072910 secs]1444.418: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 106397K(139444K), 0.0073970 secs]
> [Times: user=0.09 sys=0.00, real=0.01 secs]
> 1444.419: [CMS-concurrent-sweep-start]
> 1444.420: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1444.420: [CMS-concurrent-reset-start]
> 1444.429: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1446.429: [GC [1 CMS-initial-mark: 12849K(21428K)] 106525K(139444K),
> 0.0117950 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1446.441: [CMS-concurrent-mark-start]
> 1446.457: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1446.457: [CMS-concurrent-preclean-start]
> 1446.458: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1446.458: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1451.461:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1451.461: [GC[YG occupancy: 93996 K (118016 K)]1451.461: [Rescan
> (parallel) , 0.0120870 secs]1451.473: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 106846K(139444K), 0.0121920 secs]
> [Times: user=0.14 sys=0.00, real=0.02 secs]
> 1451.473: [CMS-concurrent-sweep-start]
> 1451.476: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1451.476: [CMS-concurrent-reset-start]
> 1451.485: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1453.485: [GC [1 CMS-initial-mark: 12849K(21428K)] 106974K(139444K),
> 0.0117990 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1453.497: [CMS-concurrent-mark-start]
> 1453.514: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1453.514: [CMS-concurrent-preclean-start]
> 1453.515: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1453.515: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1458.518:
> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1458.518: [GC[YG occupancy: 94445 K (118016 K)]1458.518: [Rescan
> (parallel) , 0.0123720 secs]1458.530: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 107295K(139444K), 0.0124750 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1458.530: [CMS-concurrent-sweep-start]
> 1458.532: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1458.532: [CMS-concurrent-reset-start]
> 1458.540: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1460.541: [GC [1 CMS-initial-mark: 12849K(21428K)] 107423K(139444K),
> 0.0118680 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1460.553: [CMS-concurrent-mark-start]
> 1460.568: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1460.568: [CMS-concurrent-preclean-start]
> 1460.569: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1460.569: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1465.577:
> [CMS-concurrent-abortable-preclean: 0.703/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1465.577: [GC[YG occupancy: 94894 K (118016 K)]1465.577: [Rescan
> (parallel) , 0.0119100 secs]1465.589: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 107744K(139444K), 0.0120270 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1465.590: [CMS-concurrent-sweep-start]
> 1465.591: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1465.591: [CMS-concurrent-reset-start]
> 1465.600: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1467.600: [GC [1 CMS-initial-mark: 12849K(21428K)] 107937K(139444K),
> 0.0120020 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1467.612: [CMS-concurrent-mark-start]
> 1467.628: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1467.628: [CMS-concurrent-preclean-start]
> 1467.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1467.628: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1472.636:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1472.637: [GC[YG occupancy: 95408 K (118016 K)]1472.637: [Rescan
> (parallel) , 0.0119090 secs]1472.649: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 108257K(139444K), 0.0120260 secs]
> [Times: user=0.13 sys=0.00, real=0.01 secs]
> 1472.649: [CMS-concurrent-sweep-start]
> 1472.650: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1472.650: [CMS-concurrent-reset-start]
> 1472.659: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1472.775: [GC [1 CMS-initial-mark: 12849K(21428K)] 108365K(139444K),
> 0.0120260 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1472.787: [CMS-concurrent-mark-start]
> 1472.805: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1472.805: [CMS-concurrent-preclean-start]
> 1472.806: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.01 sys=0.00, real=0.00 secs]
> 1472.806: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1477.808:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1477.808: [GC[YG occupancy: 95876 K (118016 K)]1477.808: [Rescan
> (parallel) , 0.0099490 secs]1477.818: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 108726K(139444K), 0.0100580 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1477.818: [CMS-concurrent-sweep-start]
> 1477.820: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1477.820: [CMS-concurrent-reset-start]
> 1477.828: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1479.829: [GC [1 CMS-initial-mark: 12849K(21428K)] 108854K(139444K),
> 0.0119550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1479.841: [CMS-concurrent-mark-start]
> 1479.857: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1479.857: [CMS-concurrent-preclean-start]
> 1479.857: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1479.857: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1484.870:
> [CMS-concurrent-abortable-preclean: 0.707/5.012 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1484.870: [GC[YG occupancy: 96325 K (118016 K)]1484.870: [Rescan
> (parallel) , 0.0122870 secs]1484.882: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 109175K(139444K), 0.0123900 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1484.882: [CMS-concurrent-sweep-start]
> 1484.884: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1484.884: [CMS-concurrent-reset-start]
> 1484.893: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1486.893: [GC [1 CMS-initial-mark: 12849K(21428K)] 109304K(139444K),
> 0.0118470 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1486.905: [CMS-concurrent-mark-start]
> 1486.921: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1486.921: [CMS-concurrent-preclean-start]
> 1486.921: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1486.921: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1491.968:
> [CMS-concurrent-abortable-preclean: 0.720/5.047 secs] [Times:
> user=0.72 sys=0.00, real=5.05 secs]
> 1491.968: [GC[YG occupancy: 96774 K (118016 K)]1491.968: [Rescan
> (parallel) , 0.0122850 secs]1491.981: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 109624K(139444K), 0.0123880 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1491.981: [CMS-concurrent-sweep-start]
> 1491.982: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1491.982: [CMS-concurrent-reset-start]
> 1491.991: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1493.991: [GC [1 CMS-initial-mark: 12849K(21428K)] 109753K(139444K),
> 0.0119790 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1494.004: [CMS-concurrent-mark-start]
> 1494.019: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1494.019: [CMS-concurrent-preclean-start]
> 1494.019: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1494.019: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1499.100:
> [CMS-concurrent-abortable-preclean: 0.722/5.080 secs] [Times:
> user=0.72 sys=0.00, real=5.08 secs]
> 1499.100: [GC[YG occupancy: 98295 K (118016 K)]1499.100: [Rescan
> (parallel) , 0.0123180 secs]1499.112: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 111145K(139444K), 0.0124240 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1499.113: [CMS-concurrent-sweep-start]
> 1499.114: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1499.114: [CMS-concurrent-reset-start]
> 1499.123: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1501.123: [GC [1 CMS-initial-mark: 12849K(21428K)] 111274K(139444K),
> 0.0117720 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
> 1501.135: [CMS-concurrent-mark-start]
> 1501.150: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 1501.150: [CMS-concurrent-preclean-start]
> 1501.151: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.01 sys=0.00, real=0.00 secs]
> 1501.151: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1506.172:
> [CMS-concurrent-abortable-preclean: 0.712/5.022 secs] [Times:
> user=0.71 sys=0.00, real=5.02 secs]
> 1506.172: [GC[YG occupancy: 98890 K (118016 K)]1506.173: [Rescan
> (parallel) , 0.0113790 secs]1506.184: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 111740K(139444K), 0.0114830 secs]
> [Times: user=0.13 sys=0.00, real=0.02 secs]
> 1506.184: [CMS-concurrent-sweep-start]
> 1506.186: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1506.186: [CMS-concurrent-reset-start]
> 1506.195: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1508.196: [GC [1 CMS-initial-mark: 12849K(21428K)] 111868K(139444K),
> 0.0122930 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1508.208: [CMS-concurrent-mark-start]
> 1508.225: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1508.225: [CMS-concurrent-preclean-start]
> 1508.225: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1508.226: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1513.232:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1513.232: [GC[YG occupancy: 99339 K (118016 K)]1513.232: [Rescan
> (parallel) , 0.0123890 secs]1513.244: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 112189K(139444K), 0.0124930 secs]
> [Times: user=0.14 sys=0.00, real=0.02 secs]
> 1513.245: [CMS-concurrent-sweep-start]
> 1513.246: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1513.246: [CMS-concurrent-reset-start]
> 1513.255: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1515.256: [GC [1 CMS-initial-mark: 12849K(21428K)] 113182K(139444K),
> 0.0123210 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1515.268: [CMS-concurrent-mark-start]
> 1515.285: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1515.285: [CMS-concurrent-preclean-start]
> 1515.285: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1515.285: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1520.290:
> [CMS-concurrent-abortable-preclean: 0.699/5.004 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1520.290: [GC[YG occupancy: 100653 K (118016 K)]1520.290: [Rescan
> (parallel) , 0.0125490 secs]1520.303: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 113502K(139444K), 0.0126520 secs]
> [Times: user=0.14 sys=0.00, real=0.01 secs]
> 1520.303: [CMS-concurrent-sweep-start]
> 1520.304: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1520.304: [CMS-concurrent-reset-start]
> 1520.313: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1522.314: [GC [1 CMS-initial-mark: 12849K(21428K)] 113631K(139444K),
> 0.0118790 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1522.326: [CMS-concurrent-mark-start]
> 1522.343: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1522.343: [CMS-concurrent-preclean-start]
> 1522.343: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1522.343: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1527.350:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1527.350: [GC[YG occupancy: 101102 K (118016 K)]1527.350: [Rescan
> (parallel) , 0.0127460 secs]1527.363: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 113952K(139444K), 0.0128490 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1527.363: [CMS-concurrent-sweep-start]
> 1527.365: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1527.365: [CMS-concurrent-reset-start]
> 1527.374: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1529.374: [GC [1 CMS-initial-mark: 12849K(21428K)] 114080K(139444K),
> 0.0117550 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1529.386: [CMS-concurrent-mark-start]
> 1529.403: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1529.404: [CMS-concurrent-preclean-start]
> 1529.404: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1529.404: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1534.454:
> [CMS-concurrent-abortable-preclean: 0.712/5.050 secs] [Times:
> user=0.70 sys=0.01, real=5.05 secs]
> 1534.454: [GC[YG occupancy: 101591 K (118016 K)]1534.454: [Rescan
> (parallel) , 0.0122680 secs]1534.466: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 114441K(139444K), 0.0123750 secs]
> [Times: user=0.12 sys=0.02, real=0.01 secs]
> 1534.466: [CMS-concurrent-sweep-start]
> 1534.468: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1534.468: [CMS-concurrent-reset-start]
> 1534.478: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1536.478: [GC [1 CMS-initial-mark: 12849K(21428K)] 114570K(139444K),
> 0.0125250 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1536.491: [CMS-concurrent-mark-start]
> 1536.507: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1536.507: [CMS-concurrent-preclean-start]
> 1536.507: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1536.507: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1541.516:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1541.516: [GC[YG occupancy: 102041 K (118016 K)]1541.516: [Rescan
> (parallel) , 0.0088270 secs]1541.525: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 114890K(139444K), 0.0089300 secs]
> [Times: user=0.10 sys=0.00, real=0.01 secs]
> 1541.525: [CMS-concurrent-sweep-start]
> 1541.527: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1541.527: [CMS-concurrent-reset-start]
> 1541.537: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1543.537: [GC [1 CMS-initial-mark: 12849K(21428K)] 115019K(139444K),
> 0.0124500 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1543.550: [CMS-concurrent-mark-start]
> 1543.566: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1543.566: [CMS-concurrent-preclean-start]
> 1543.567: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1543.567: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1548.578:
> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1548.578: [GC[YG occupancy: 102490 K (118016 K)]1548.578: [Rescan
> (parallel) , 0.0100430 secs]1548.588: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 115340K(139444K), 0.0101440 secs]
> [Times: user=0.11 sys=0.00, real=0.01 secs]
> 1548.588: [CMS-concurrent-sweep-start]
> 1548.589: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1548.589: [CMS-concurrent-reset-start]
> 1548.598: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1550.598: [GC [1 CMS-initial-mark: 12849K(21428K)] 115468K(139444K),
> 0.0125070 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1550.611: [CMS-concurrent-mark-start]
> 1550.627: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1550.627: [CMS-concurrent-preclean-start]
> 1550.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1550.628: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1555.631:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1555.631: [GC[YG occupancy: 103003 K (118016 K)]1555.631: [Rescan
> (parallel) , 0.0117610 secs]1555.643: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 115853K(139444K), 0.0118770 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1555.643: [CMS-concurrent-sweep-start]
> 1555.645: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1555.645: [CMS-concurrent-reset-start]
> 1555.655: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1557.655: [GC [1 CMS-initial-mark: 12849K(21428K)] 115981K(139444K),
> 0.0126720 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1557.668: [CMS-concurrent-mark-start]
> 1557.685: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1557.685: [CMS-concurrent-preclean-start]
> 1557.685: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1557.685: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1562.688:
> [CMS-concurrent-abortable-preclean: 0.697/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1562.688: [GC[YG occupancy: 103557 K (118016 K)]1562.688: [Rescan
> (parallel) , 0.0121530 secs]1562.700: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 116407K(139444K), 0.0122560 secs]
> [Times: user=0.16 sys=0.00, real=0.01 secs]
> 1562.700: [CMS-concurrent-sweep-start]
> 1562.701: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1562.701: [CMS-concurrent-reset-start]
> 1562.710: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1562.821: [GC [1 CMS-initial-mark: 12849K(21428K)] 116514K(139444K),
> 0.0127240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1562.834: [CMS-concurrent-mark-start]
> 1562.852: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1562.852: [CMS-concurrent-preclean-start]
> 1562.853: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1562.853: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1567.859:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1567.859: [GC[YG occupancy: 104026 K (118016 K)]1567.859: [Rescan
> (parallel) , 0.0131290 secs]1567.872: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 116876K(139444K), 0.0132470 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1567.873: [CMS-concurrent-sweep-start]
> 1567.874: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1567.874: [CMS-concurrent-reset-start]
> 1567.883: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1569.883: [GC [1 CMS-initial-mark: 12849K(21428K)] 117103K(139444K),
> 0.0123770 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1569.896: [CMS-concurrent-mark-start]
> 1569.913: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1569.913: [CMS-concurrent-preclean-start]
> 1569.913: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.01 secs]
> 1569.913: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1574.920:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1574.920: [GC[YG occupancy: 104510 K (118016 K)]1574.920: [Rescan
> (parallel) , 0.0122810 secs]1574.932: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 117360K(139444K), 0.0123870 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1574.933: [CMS-concurrent-sweep-start]
> 1574.935: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1574.935: [CMS-concurrent-reset-start]
> 1574.944: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1575.163: [GC [1 CMS-initial-mark: 12849K(21428K)] 117360K(139444K),
> 0.0121590 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
> 1575.176: [CMS-concurrent-mark-start]
> 1575.193: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1575.193: [CMS-concurrent-preclean-start]
> 1575.193: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.01 secs]
> 1575.193: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1580.197:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.71 sys=0.00, real=5.00 secs]
> 1580.197: [GC[YG occupancy: 104831 K (118016 K)]1580.197: [Rescan
> (parallel) , 0.0129860 secs]1580.210: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 117681K(139444K), 0.0130980 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1580.210: [CMS-concurrent-sweep-start]
> 1580.211: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1580.211: [CMS-concurrent-reset-start]
> 1580.220: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1582.220: [GC [1 CMS-initial-mark: 12849K(21428K)] 117809K(139444K),
> 0.0129700 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1582.234: [CMS-concurrent-mark-start]
> 1582.249: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.05
> sys=0.01, real=0.02 secs]
> 1582.249: [CMS-concurrent-preclean-start]
> 1582.249: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1582.249: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1587.262:
> [CMS-concurrent-abortable-preclean: 0.707/5.013 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1587.262: [GC[YG occupancy: 105280 K (118016 K)]1587.262: [Rescan
> (parallel) , 0.0134570 secs]1587.276: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 118130K(139444K), 0.0135720 secs]
> [Times: user=0.15 sys=0.00, real=0.02 secs]
> 1587.276: [CMS-concurrent-sweep-start]
> 1587.278: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1587.278: [CMS-concurrent-reset-start]
> 1587.287: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1589.287: [GC [1 CMS-initial-mark: 12849K(21428K)] 118258K(139444K),
> 0.0130010 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1589.301: [CMS-concurrent-mark-start]
> 1589.316: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1589.316: [CMS-concurrent-preclean-start]
> 1589.316: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1589.316: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1594.364:
> [CMS-concurrent-abortable-preclean: 0.712/5.048 secs] [Times:
> user=0.71 sys=0.00, real=5.05 secs]
> 1594.365: [GC[YG occupancy: 105770 K (118016 K)]1594.365: [Rescan
> (parallel) , 0.0131190 secs]1594.378: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 118620K(139444K), 0.0132380 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1594.378: [CMS-concurrent-sweep-start]
> 1594.380: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1594.380: [CMS-concurrent-reset-start]
> 1594.389: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1596.390: [GC [1 CMS-initial-mark: 12849K(21428K)] 118748K(139444K),
> 0.0130650 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1596.403: [CMS-concurrent-mark-start]
> 1596.418: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1596.418: [CMS-concurrent-preclean-start]
> 1596.419: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1596.419: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1601.422:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.69 sys=0.01, real=5.00 secs]
> 1601.422: [GC[YG occupancy: 106219 K (118016 K)]1601.422: [Rescan
> (parallel) , 0.0130310 secs]1601.435: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 119069K(139444K), 0.0131490 secs]
> [Times: user=0.16 sys=0.00, real=0.02 secs]
> 1601.435: [CMS-concurrent-sweep-start]
> 1601.437: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1601.437: [CMS-concurrent-reset-start]
> 1601.446: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1603.447: [GC [1 CMS-initial-mark: 12849K(21428K)] 119197K(139444K),
> 0.0130220 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1603.460: [CMS-concurrent-mark-start]
> 1603.476: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1603.476: [CMS-concurrent-preclean-start]
> 1603.476: [CMS-concurrent-preclean: 0.000/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1603.476: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1608.478:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1608.478: [GC[YG occupancy: 106668 K (118016 K)]1608.479: [Rescan
> (parallel) , 0.0122680 secs]1608.491: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 119518K(139444K), 0.0123790 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1608.491: [CMS-concurrent-sweep-start]
> 1608.492: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1608.492: [CMS-concurrent-reset-start]
> 1608.501: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1610.502: [GC [1 CMS-initial-mark: 12849K(21428K)] 119646K(139444K),
> 0.0130770 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1610.515: [CMS-concurrent-mark-start]
> 1610.530: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1610.530: [CMS-concurrent-preclean-start]
> 1610.530: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1610.530: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1615.536:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1615.536: [GC[YG occupancy: 107117 K (118016 K)]1615.536: [Rescan
> (parallel) , 0.0125470 secs]1615.549: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 119967K(139444K), 0.0126510 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1615.549: [CMS-concurrent-sweep-start]
> 1615.551: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1615.551: [CMS-concurrent-reset-start]
> 1615.561: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1617.561: [GC [1 CMS-initial-mark: 12849K(21428K)] 120095K(139444K),
> 0.0129520 secs] [Times: user=0.00 sys=0.00, real=0.02 secs]
> 1617.574: [CMS-concurrent-mark-start]
> 1617.591: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1617.591: [CMS-concurrent-preclean-start]
> 1617.591: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1617.591: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1622.598:
> [CMS-concurrent-abortable-preclean: 0.702/5.007 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1622.598: [GC[YG occupancy: 107777 K (118016 K)]1622.599: [Rescan
> (parallel) , 0.0140340 secs]1622.613: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 120627K(139444K), 0.0141520 secs]
> [Times: user=0.16 sys=0.00, real=0.01 secs]
> 1622.613: [CMS-concurrent-sweep-start]
> 1622.614: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1622.614: [CMS-concurrent-reset-start]
> 1622.623: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.02 secs]
> 1622.848: [GC [1 CMS-initial-mark: 12849K(21428K)] 120691K(139444K),
> 0.0133410 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1622.861: [CMS-concurrent-mark-start]
> 1622.878: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1622.878: [CMS-concurrent-preclean-start]
> 1622.879: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1622.879: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1627.941:
> [CMS-concurrent-abortable-preclean: 0.656/5.062 secs] [Times:
> user=0.65 sys=0.00, real=5.06 secs]
> 1627.941: [GC[YG occupancy: 108202 K (118016 K)]1627.941: [Rescan
> (parallel) , 0.0135120 secs]1627.955: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 121052K(139444K), 0.0136620 secs]
> [Times: user=0.15 sys=0.00, real=0.02 secs]
> 1627.955: [CMS-concurrent-sweep-start]
> 1627.956: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1627.956: [CMS-concurrent-reset-start]
> 1627.965: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1629.966: [GC [1 CMS-initial-mark: 12849K(21428K)] 121180K(139444K),
> 0.0133770 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1629.979: [CMS-concurrent-mark-start]
> 1629.995: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1629.995: [CMS-concurrent-preclean-start]
> 1629.996: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1629.996: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1634.998:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.69 sys=0.00, real=5.00 secs]
> 1634.999: [GC[YG occupancy: 108651 K (118016 K)]1634.999: [Rescan
> (parallel) , 0.0134300 secs]1635.012: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 121501K(139444K), 0.0135530 secs]
> [Times: user=0.16 sys=0.00, real=0.01 secs]
> 1635.012: [CMS-concurrent-sweep-start]
> 1635.014: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1635.014: [CMS-concurrent-reset-start]
> 1635.023: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1637.023: [GC [1 CMS-initial-mark: 12849K(21428K)] 121629K(139444K),
> 0.0127330 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1637.036: [CMS-concurrent-mark-start]
> 1637.053: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1637.054: [CMS-concurrent-preclean-start]
> 1637.054: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1637.054: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1642.062:
> [CMS-concurrent-abortable-preclean: 0.703/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1642.062: [GC[YG occupancy: 109100 K (118016 K)]1642.062: [Rescan
> (parallel) , 0.0124310 secs]1642.075: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 121950K(139444K), 0.0125510 secs]
> [Times: user=0.16 sys=0.00, real=0.02 secs]
> 1642.075: [CMS-concurrent-sweep-start]
> 1642.077: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1642.077: [CMS-concurrent-reset-start]
> 1642.086: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1644.087: [GC [1 CMS-initial-mark: 12849K(21428K)] 122079K(139444K),
> 0.0134300 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1644.100: [CMS-concurrent-mark-start]
> 1644.116: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1644.116: [CMS-concurrent-preclean-start]
> 1644.116: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1644.116: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1649.125:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1649.126: [GC[YG occupancy: 109549 K (118016 K)]1649.126: [Rescan
> (parallel) , 0.0126870 secs]1649.138: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 122399K(139444K), 0.0128010 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1649.139: [CMS-concurrent-sweep-start]
> 1649.141: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1649.141: [CMS-concurrent-reset-start]
> 1649.150: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1651.150: [GC [1 CMS-initial-mark: 12849K(21428K)] 122528K(139444K),
> 0.0134790 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1651.164: [CMS-concurrent-mark-start]
> 1651.179: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1651.179: [CMS-concurrent-preclean-start]
> 1651.179: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1651.179: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1656.254:
> [CMS-concurrent-abortable-preclean: 0.722/5.074 secs] [Times:
> user=0.71 sys=0.01, real=5.07 secs]
> 1656.254: [GC[YG occupancy: 110039 K (118016 K)]1656.254: [Rescan
> (parallel) , 0.0092110 secs]1656.263: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 122889K(139444K), 0.0093170 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1656.263: [CMS-concurrent-sweep-start]
> 1656.266: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1656.266: [CMS-concurrent-reset-start]
> 1656.275: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1658.275: [GC [1 CMS-initial-mark: 12849K(21428K)] 123017K(139444K),
> 0.0134150 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1658.289: [CMS-concurrent-mark-start]
> 1658.305: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1658.306: [CMS-concurrent-preclean-start]
> 1658.306: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1658.306: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1663.393:
> [CMS-concurrent-abortable-preclean: 0.711/5.087 secs] [Times:
> user=0.71 sys=0.00, real=5.08 secs]
> 1663.393: [GC[YG occupancy: 110488 K (118016 K)]1663.393: [Rescan
> (parallel) , 0.0132450 secs]1663.406: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 123338K(139444K), 0.0133600 secs]
> [Times: user=0.15 sys=0.00, real=0.02 secs]
> 1663.407: [CMS-concurrent-sweep-start]
> 1663.409: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1663.409: [CMS-concurrent-reset-start]
> 1663.418: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1665.418: [GC [1 CMS-initial-mark: 12849K(21428K)] 123467K(139444K),
> 0.0135570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1665.432: [CMS-concurrent-mark-start]
> 1665.447: [CMS-concurrent-mark: 0.015/0.015 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1665.447: [CMS-concurrent-preclean-start]
> 1665.448: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1665.448: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1670.457:
> [CMS-concurrent-abortable-preclean: 0.704/5.009 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1670.457: [GC[YG occupancy: 110937 K (118016 K)]1670.457: [Rescan
> (parallel) , 0.0142820 secs]1670.471: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 123787K(139444K), 0.0144010 secs]
> [Times: user=0.16 sys=0.00, real=0.01 secs]
> 1670.472: [CMS-concurrent-sweep-start]
> 1670.473: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1670.473: [CMS-concurrent-reset-start]
> 1670.482: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1672.482: [GC [1 CMS-initial-mark: 12849K(21428K)] 123916K(139444K),
> 0.0136110 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1672.496: [CMS-concurrent-mark-start]
> 1672.513: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1672.513: [CMS-concurrent-preclean-start]
> 1672.513: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1672.513: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1677.530:
> [CMS-concurrent-abortable-preclean: 0.711/5.017 secs] [Times:
> user=0.71 sys=0.00, real=5.02 secs]
> 1677.530: [GC[YG occupancy: 111387 K (118016 K)]1677.530: [Rescan
> (parallel) , 0.0129210 secs]1677.543: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 124236K(139444K), 0.0130360 secs]
> [Times: user=0.16 sys=0.00, real=0.02 secs]
> 1677.543: [CMS-concurrent-sweep-start]
> 1677.545: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1677.545: [CMS-concurrent-reset-start]
> 1677.554: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1679.554: [GC [1 CMS-initial-mark: 12849K(21428K)] 124365K(139444K),
> 0.0125140 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1679.567: [CMS-concurrent-mark-start]
> 1679.584: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1679.584: [CMS-concurrent-preclean-start]
> 1679.584: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1679.584: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1684.631:
> [CMS-concurrent-abortable-preclean: 0.714/5.047 secs] [Times:
> user=0.72 sys=0.00, real=5.04 secs]
> 1684.631: [GC[YG occupancy: 112005 K (118016 K)]1684.631: [Rescan
> (parallel) , 0.0146760 secs]1684.646: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 124855K(139444K), 0.0147930 secs]
> [Times: user=0.16 sys=0.00, real=0.02 secs]
> 1684.646: [CMS-concurrent-sweep-start]
> 1684.648: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1684.648: [CMS-concurrent-reset-start]
> 1684.656: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1686.656: [GC [1 CMS-initial-mark: 12849K(21428K)] 125048K(139444K),
> 0.0138340 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1686.670: [CMS-concurrent-mark-start]
> 1686.686: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1686.686: [CMS-concurrent-preclean-start]
> 1686.687: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1686.687: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1691.689:
> [CMS-concurrent-abortable-preclean: 0.697/5.002 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1691.689: [GC[YG occupancy: 112518 K (118016 K)]1691.689: [Rescan
> (parallel) , 0.0142600 secs]1691.703: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 12849K(21428K)] 125368K(139444K), 0.0143810 secs]
> [Times: user=0.16 sys=0.00, real=0.02 secs]
> 1691.703: [CMS-concurrent-sweep-start]
> 1691.705: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1691.705: [CMS-concurrent-reset-start]
> 1691.714: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1693.714: [GC [1 CMS-initial-mark: 12849K(21428K)] 125497K(139444K),
> 0.0126710 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1693.727: [CMS-concurrent-mark-start]
> 1693.744: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1693.744: [CMS-concurrent-preclean-start]
> 1693.745: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1693.745: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1698.747:
> [CMS-concurrent-abortable-preclean: 0.698/5.003 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1698.748: [GC[YG occupancy: 112968 K (118016 K)]1698.748: [Rescan
> (parallel) , 0.0147370 secs]1698.762: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 125818K(139444K), 0.0148490 secs]
> [Times: user=0.17 sys=0.00, real=0.01 secs]
> 1698.763: [CMS-concurrent-sweep-start]
> 1698.764: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1698.764: [CMS-concurrent-reset-start]
> 1698.773: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1700.773: [GC [1 CMS-initial-mark: 12849K(21428K)] 125946K(139444K),
> 0.0128810 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1700.786: [CMS-concurrent-mark-start]
> 1700.804: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1700.804: [CMS-concurrent-preclean-start]
> 1700.804: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1700.804: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1705.810:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1705.810: [GC[YG occupancy: 113417 K (118016 K)]1705.810: [Rescan
> (parallel) , 0.0146750 secs]1705.825: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 126267K(139444K), 0.0147760 secs]
> [Times: user=0.17 sys=0.00, real=0.02 secs]
> 1705.825: [CMS-concurrent-sweep-start]
> 1705.827: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1705.827: [CMS-concurrent-reset-start]
> 1705.836: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1707.836: [GC [1 CMS-initial-mark: 12849K(21428K)] 126395K(139444K),
> 0.0137570 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1707.850: [CMS-concurrent-mark-start]
> 1707.866: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1707.866: [CMS-concurrent-preclean-start]
> 1707.867: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1707.867: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1712.878:
> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1712.878: [GC[YG occupancy: 113866 K (118016 K)]1712.878: [Rescan
> (parallel) , 0.0116340 secs]1712.890: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 126716K(139444K), 0.0117350 secs]
> [Times: user=0.12 sys=0.00, real=0.01 secs]
> 1712.890: [CMS-concurrent-sweep-start]
> 1712.893: [CMS-concurrent-sweep: 0.002/0.003 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1712.893: [CMS-concurrent-reset-start]
> 1712.902: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1714.902: [GC [1 CMS-initial-mark: 12849K(21428K)] 126984K(139444K),
> 0.0134590 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1714.915: [CMS-concurrent-mark-start]
> 1714.933: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1714.933: [CMS-concurrent-preclean-start]
> 1714.934: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1714.934: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1719.940:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.71 sys=0.00, real=5.00 secs]
> 1719.940: [GC[YG occupancy: 114552 K (118016 K)]1719.940: [Rescan
> (parallel) , 0.0141320 secs]1719.955: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 127402K(139444K), 0.0142280 secs]
> [Times: user=0.16 sys=0.01, real=0.02 secs]
> 1719.955: [CMS-concurrent-sweep-start]
> 1719.956: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1719.956: [CMS-concurrent-reset-start]
> 1719.965: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1721.966: [GC [1 CMS-initial-mark: 12849K(21428K)] 127530K(139444K),
> 0.0139120 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1721.980: [CMS-concurrent-mark-start]
> 1721.996: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1721.996: [CMS-concurrent-preclean-start]
> 1721.997: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1721.997: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1727.010:
> [CMS-concurrent-abortable-preclean: 0.708/5.013 secs] [Times:
> user=0.71 sys=0.00, real=5.01 secs]
> 1727.010: [GC[YG occupancy: 115000 K (118016 K)]1727.010: [Rescan
> (parallel) , 0.0123190 secs]1727.023: [weak refs processing, 0.0000130
> secs] [1 CMS-remark: 12849K(21428K)] 127850K(139444K), 0.0124420 secs]
> [Times: user=0.15 sys=0.00, real=0.01 secs]
> 1727.023: [CMS-concurrent-sweep-start]
> 1727.024: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1727.024: [CMS-concurrent-reset-start]
> 1727.033: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1729.034: [GC [1 CMS-initial-mark: 12849K(21428K)] 127978K(139444K),
> 0.0129330 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1729.047: [CMS-concurrent-mark-start]
> 1729.064: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1729.064: [CMS-concurrent-preclean-start]
> 1729.064: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1729.064: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1734.075:
> [CMS-concurrent-abortable-preclean: 0.706/5.011 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1734.075: [GC[YG occupancy: 115449 K (118016 K)]1734.075: [Rescan
> (parallel) , 0.0131600 secs]1734.088: [weak refs processing, 0.0000130
> secs] [1 CMS-remark: 12849K(21428K)] 128298K(139444K), 0.0132810 secs]
> [Times: user=0.16 sys=0.00, real=0.01 secs]
> 1734.089: [CMS-concurrent-sweep-start]
> 1734.091: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1734.091: [CMS-concurrent-reset-start]
> 1734.100: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1736.100: [GC [1 CMS-initial-mark: 12849K(21428K)] 128427K(139444K),
> 0.0141000 secs] [Times: user=0.01 sys=0.00, real=0.02 secs]
> 1736.115: [CMS-concurrent-mark-start]
> 1736.130: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1736.131: [CMS-concurrent-preclean-start]
> 1736.131: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1736.131: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1741.139:
> [CMS-concurrent-abortable-preclean: 0.702/5.008 secs] [Times:
> user=0.70 sys=0.00, real=5.01 secs]
> 1741.139: [GC[YG occupancy: 115897 K (118016 K)]1741.139: [Rescan
> (parallel) , 0.0146880 secs]1741.154: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 12849K(21428K)] 128747K(139444K), 0.0148020 secs]
> [Times: user=0.17 sys=0.00, real=0.02 secs]
> 1741.154: [CMS-concurrent-sweep-start]
> 1741.156: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1741.156: [CMS-concurrent-reset-start]
> 1741.165: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1742.898: [GC [1 CMS-initial-mark: 12849K(21428K)] 129085K(139444K),
> 0.0144050 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1742.913: [CMS-concurrent-mark-start]
> 1742.931: [CMS-concurrent-mark: 0.017/0.019 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1742.931: [CMS-concurrent-preclean-start]
> 1742.932: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1742.932: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1748.016:
> [CMS-concurrent-abortable-preclean: 0.728/5.084 secs] [Times:
> user=0.73 sys=0.00, real=5.09 secs]
> 1748.016: [GC[YG occupancy: 116596 K (118016 K)]1748.016: [Rescan
> (parallel) , 0.0149950 secs]1748.031: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 129446K(139444K), 0.0150970 secs]
> [Times: user=0.17 sys=0.00, real=0.01 secs]
> 1748.031: [CMS-concurrent-sweep-start]
> 1748.033: [CMS-concurrent-sweep: 0.002/0.002 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1748.033: [CMS-concurrent-reset-start]
> 1748.041: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1750.042: [GC [1 CMS-initial-mark: 12849K(21428K)] 129574K(139444K),
> 0.0141840 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1750.056: [CMS-concurrent-mark-start]
> 1750.073: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1750.073: [CMS-concurrent-preclean-start]
> 1750.074: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1750.074: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1755.080:
> [CMS-concurrent-abortable-preclean: 0.701/5.006 secs] [Times:
> user=0.70 sys=0.00, real=5.00 secs]
> 1755.080: [GC[YG occupancy: 117044 K (118016 K)]1755.080: [Rescan
> (parallel) , 0.0155560 secs]1755.096: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 12849K(21428K)] 129894K(139444K), 0.0156580 secs]
> [Times: user=0.17 sys=0.00, real=0.02 secs]
> 1755.096: [CMS-concurrent-sweep-start]
> 1755.097: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1755.097: [CMS-concurrent-reset-start]
> 1755.105: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1756.660: [GC 1756.660: [ParNew: 117108K->482K(118016K), 0.0081410
> secs] 129958K->24535K(144568K), 0.0083030 secs] [Times: user=0.05
> sys=0.01, real=0.01 secs]
> 1756.668: [GC [1 CMS-initial-mark: 24053K(26552K)] 24599K(144568K),
> 0.0015280 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1756.670: [CMS-concurrent-mark-start]
> 1756.688: [CMS-concurrent-mark: 0.016/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1756.688: [CMS-concurrent-preclean-start]
> 1756.689: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1756.689: [GC[YG occupancy: 546 K (118016 K)]1756.689: [Rescan
> (parallel) , 0.0018170 secs]1756.691: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(26552K)] 24599K(144568K), 0.0019050 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1756.691: [CMS-concurrent-sweep-start]
> 1756.694: [CMS-concurrent-sweep: 0.004/0.004 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1756.694: [CMS-concurrent-reset-start]
> 1756.704: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1758.704: [GC [1 CMS-initial-mark: 24053K(40092K)] 25372K(158108K),
> 0.0014030 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1758.705: [CMS-concurrent-mark-start]
> 1758.720: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.05
> sys=0.00, real=0.01 secs]
> 1758.720: [CMS-concurrent-preclean-start]
> 1758.720: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.01 sys=0.00, real=0.00 secs]
> 1758.721: [GC[YG occupancy: 1319 K (118016 K)]1758.721: [Rescan
> (parallel) , 0.0014940 secs]1758.722: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 25372K(158108K), 0.0015850 secs]
> [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1758.722: [CMS-concurrent-sweep-start]
> 1758.726: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1758.726: [CMS-concurrent-reset-start]
> 1758.735: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1760.735: [GC [1 CMS-initial-mark: 24053K(40092K)] 25565K(158108K),
> 0.0014530 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1760.737: [CMS-concurrent-mark-start]
> 1760.755: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1760.755: [CMS-concurrent-preclean-start]
> 1760.755: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1760.756: [GC[YG occupancy: 1512 K (118016 K)]1760.756: [Rescan
> (parallel) , 0.0014970 secs]1760.757: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 25565K(158108K), 0.0015980 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1760.757: [CMS-concurrent-sweep-start]
> 1760.761: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1760.761: [CMS-concurrent-reset-start]
> 1760.770: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1762.770: [GC [1 CMS-initial-mark: 24053K(40092K)] 25693K(158108K),
> 0.0013680 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1762.772: [CMS-concurrent-mark-start]
> 1762.788: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1762.788: [CMS-concurrent-preclean-start]
> 1762.788: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1762.788: [GC[YG occupancy: 1640 K (118016 K)]1762.789: [Rescan
> (parallel) , 0.0020360 secs]1762.791: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 25693K(158108K), 0.0021450 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1762.791: [CMS-concurrent-sweep-start]
> 1762.794: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1762.794: [CMS-concurrent-reset-start]
> 1762.803: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1764.804: [GC [1 CMS-initial-mark: 24053K(40092K)] 26747K(158108K),
> 0.0014620 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1764.805: [CMS-concurrent-mark-start]
> 1764.819: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1764.819: [CMS-concurrent-preclean-start]
> 1764.820: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1764.820: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1769.835:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.02 secs]
> 1769.835: [GC[YG occupancy: 3015 K (118016 K)]1769.835: [Rescan
> (parallel) , 0.0010360 secs]1769.836: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 27068K(158108K), 0.0011310 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1769.837: [CMS-concurrent-sweep-start]
> 1769.840: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1769.840: [CMS-concurrent-reset-start]
> 1769.849: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1771.850: [GC [1 CMS-initial-mark: 24053K(40092K)] 27196K(158108K),
> 0.0014740 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1771.851: [CMS-concurrent-mark-start]
> 1771.868: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1771.868: [CMS-concurrent-preclean-start]
> 1771.868: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1771.868: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1776.913:
> [CMS-concurrent-abortable-preclean: 0.112/5.044 secs] [Times:
> user=0.12 sys=0.00, real=5.04 secs]
> 1776.913: [GC[YG occupancy: 4052 K (118016 K)]1776.913: [Rescan
> (parallel) , 0.0017790 secs]1776.915: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 28105K(158108K), 0.0018790 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1776.915: [CMS-concurrent-sweep-start]
> 1776.918: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1776.918: [CMS-concurrent-reset-start]
> 1776.927: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1778.928: [GC [1 CMS-initial-mark: 24053K(40092K)] 28233K(158108K),
> 0.0015470 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1778.929: [CMS-concurrent-mark-start]
> 1778.947: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1778.947: [CMS-concurrent-preclean-start]
> 1778.947: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1778.947: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1783.963:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1783.963: [GC[YG occupancy: 4505 K (118016 K)]1783.963: [Rescan
> (parallel) , 0.0014480 secs]1783.965: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 28558K(158108K), 0.0015470 secs]
> [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1783.965: [CMS-concurrent-sweep-start]
> 1783.968: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1783.968: [CMS-concurrent-reset-start]
> 1783.977: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1785.978: [GC [1 CMS-initial-mark: 24053K(40092K)] 28686K(158108K),
> 0.0015760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1785.979: [CMS-concurrent-mark-start]
> 1785.996: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1785.996: [CMS-concurrent-preclean-start]
> 1785.996: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1785.996: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1791.009:
> [CMS-concurrent-abortable-preclean: 0.107/5.013 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1791.010: [GC[YG occupancy: 4954 K (118016 K)]1791.010: [Rescan
> (parallel) , 0.0020030 secs]1791.012: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 29007K(158108K), 0.0021040 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1791.012: [CMS-concurrent-sweep-start]
> 1791.015: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1791.015: [CMS-concurrent-reset-start]
> 1791.023: [CMS-concurrent-reset: 0.008/0.008 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1793.023: [GC [1 CMS-initial-mark: 24053K(40092K)] 29136K(158108K),
> 0.0017200 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1793.025: [CMS-concurrent-mark-start]
> 1793.044: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.08
> sys=0.00, real=0.02 secs]
> 1793.044: [CMS-concurrent-preclean-start]
> 1793.045: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1793.045: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1798.137:
> [CMS-concurrent-abortable-preclean: 0.112/5.093 secs] [Times:
> user=0.11 sys=0.00, real=5.09 secs]
> 1798.137: [GC[YG occupancy: 6539 K (118016 K)]1798.137: [Rescan
> (parallel) , 0.0016650 secs]1798.139: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 30592K(158108K), 0.0017600 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1798.139: [CMS-concurrent-sweep-start]
> 1798.143: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1798.143: [CMS-concurrent-reset-start]
> 1798.152: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1800.152: [GC [1 CMS-initial-mark: 24053K(40092K)] 30721K(158108K),
> 0.0016650 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1800.154: [CMS-concurrent-mark-start]
> 1800.170: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1800.170: [CMS-concurrent-preclean-start]
> 1800.171: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1800.171: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1805.181:
> [CMS-concurrent-abortable-preclean: 0.110/5.010 secs] [Times:
> user=0.12 sys=0.00, real=5.01 secs]
> 1805.181: [GC[YG occupancy: 8090 K (118016 K)]1805.181: [Rescan
> (parallel) , 0.0018850 secs]1805.183: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 32143K(158108K), 0.0019860 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1805.183: [CMS-concurrent-sweep-start]
> 1805.187: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1805.187: [CMS-concurrent-reset-start]
> 1805.196: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1807.196: [GC [1 CMS-initial-mark: 24053K(40092K)] 32272K(158108K),
> 0.0018760 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1807.198: [CMS-concurrent-mark-start]
> 1807.216: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1807.216: [CMS-concurrent-preclean-start]
> 1807.216: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1807.216: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1812.232:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1812.232: [GC[YG occupancy: 8543 K (118016 K)]1812.232: [Rescan
> (parallel) , 0.0020890 secs]1812.234: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 32596K(158108K), 0.0021910 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1812.234: [CMS-concurrent-sweep-start]
> 1812.238: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1812.238: [CMS-concurrent-reset-start]
> 1812.247: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1812.928: [GC [1 CMS-initial-mark: 24053K(40092K)] 32661K(158108K),
> 0.0019710 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1812.930: [CMS-concurrent-mark-start]
> 1812.947: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1812.947: [CMS-concurrent-preclean-start]
> 1812.947: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1812.948: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1817.963:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1817.963: [GC[YG occupancy: 8928 K (118016 K)]1817.963: [Rescan
> (parallel) , 0.0011790 secs]1817.964: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 32981K(158108K), 0.0012750 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1817.964: [CMS-concurrent-sweep-start]
> 1817.968: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1817.968: [CMS-concurrent-reset-start]
> 1817.977: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1819.977: [GC [1 CMS-initial-mark: 24053K(40092K)] 33110K(158108K),
> 0.0018900 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1819.979: [CMS-concurrent-mark-start]
> 1819.996: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1819.997: [CMS-concurrent-preclean-start]
> 1819.997: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1819.997: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1825.012:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1825.013: [GC[YG occupancy: 9377 K (118016 K)]1825.013: [Rescan
> (parallel) , 0.0020580 secs]1825.015: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 33431K(158108K), 0.0021510 secs]
> [Times: user=0.01 sys=0.00, real=0.01 secs]
> 1825.015: [CMS-concurrent-sweep-start]
> 1825.018: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1825.018: [CMS-concurrent-reset-start]
> 1825.027: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1827.028: [GC [1 CMS-initial-mark: 24053K(40092K)] 33559K(158108K),
> 0.0019140 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1827.030: [CMS-concurrent-mark-start]
> 1827.047: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1827.047: [CMS-concurrent-preclean-start]
> 1827.047: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1827.047: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1832.066:
> [CMS-concurrent-abortable-preclean: 0.109/5.018 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1832.066: [GC[YG occupancy: 9827 K (118016 K)]1832.066: [Rescan
> (parallel) , 0.0019440 secs]1832.068: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 33880K(158108K), 0.0020410 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1832.068: [CMS-concurrent-sweep-start]
> 1832.071: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1832.071: [CMS-concurrent-reset-start]
> 1832.081: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1832.935: [GC [1 CMS-initial-mark: 24053K(40092K)] 34093K(158108K),
> 0.0019830 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1832.937: [CMS-concurrent-mark-start]
> 1832.954: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1832.954: [CMS-concurrent-preclean-start]
> 1832.955: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1832.955: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1837.970:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1837.970: [GC[YG occupancy: 10349 K (118016 K)]1837.970: [Rescan
> (parallel) , 0.0019670 secs]1837.972: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 34402K(158108K), 0.0020800 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1837.972: [CMS-concurrent-sweep-start]
> 1837.976: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1837.976: [CMS-concurrent-reset-start]
> 1837.985: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1839.985: [GC [1 CMS-initial-mark: 24053K(40092K)] 34531K(158108K),
> 0.0020220 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1839.987: [CMS-concurrent-mark-start]
> 1840.005: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.06
> sys=0.01, real=0.02 secs]
> 1840.005: [CMS-concurrent-preclean-start]
> 1840.006: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1840.006: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1845.018:
> [CMS-concurrent-abortable-preclean: 0.106/5.012 secs] [Times:
> user=0.10 sys=0.01, real=5.01 secs]
> 1845.018: [GC[YG occupancy: 10798 K (118016 K)]1845.018: [Rescan
> (parallel) , 0.0015500 secs]1845.019: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 34851K(158108K), 0.0016500 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1845.020: [CMS-concurrent-sweep-start]
> 1845.023: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1845.023: [CMS-concurrent-reset-start]
> 1845.032: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1847.032: [GC [1 CMS-initial-mark: 24053K(40092K)] 34980K(158108K),
> 0.0020600 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1847.035: [CMS-concurrent-mark-start]
> 1847.051: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.01 secs]
> 1847.051: [CMS-concurrent-preclean-start]
> 1847.052: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1847.052: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1852.067:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.02 secs]
> 1852.067: [GC[YG occupancy: 11247 K (118016 K)]1852.067: [Rescan
> (parallel) , 0.0011880 secs]1852.069: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 35300K(158108K), 0.0012900 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1852.069: [CMS-concurrent-sweep-start]
> 1852.072: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1852.072: [CMS-concurrent-reset-start]
> 1852.081: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1854.082: [GC [1 CMS-initial-mark: 24053K(40092K)] 35429K(158108K),
> 0.0021010 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1854.084: [CMS-concurrent-mark-start]
> 1854.100: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1854.100: [CMS-concurrent-preclean-start]
> 1854.101: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1854.101: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1859.116:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1859.116: [GC[YG occupancy: 11701 K (118016 K)]1859.117: [Rescan
> (parallel) , 0.0010230 secs]1859.118: [weak refs processing, 0.0000130
> secs] [1 CMS-remark: 24053K(40092K)] 35754K(158108K), 0.0011230 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1859.118: [CMS-concurrent-sweep-start]
> 1859.121: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1859.121: [CMS-concurrent-reset-start]
> 1859.130: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1861.131: [GC [1 CMS-initial-mark: 24053K(40092K)] 35882K(158108K),
> 0.0021240 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1861.133: [CMS-concurrent-mark-start]
> 1861.149: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1861.149: [CMS-concurrent-preclean-start]
> 1861.150: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1861.150: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1866.220:
> [CMS-concurrent-abortable-preclean: 0.112/5.070 secs] [Times:
> user=0.12 sys=0.00, real=5.07 secs]
> 1866.220: [GC[YG occupancy: 12388 K (118016 K)]1866.220: [Rescan
> (parallel) , 0.0027090 secs]1866.223: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 36441K(158108K), 0.0028070 secs]
> [Times: user=0.02 sys=0.00, real=0.01 secs]
> 1866.223: [CMS-concurrent-sweep-start]
> 1866.227: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1866.227: [CMS-concurrent-reset-start]
> 1866.236: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1868.236: [GC [1 CMS-initial-mark: 24053K(40092K)] 36569K(158108K),
> 0.0023650 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1868.239: [CMS-concurrent-mark-start]
> 1868.256: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1868.256: [CMS-concurrent-preclean-start]
> 1868.257: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1868.257: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1873.267:
> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
> user=0.13 sys=0.00, real=5.01 secs]
> 1873.268: [GC[YG occupancy: 12837 K (118016 K)]1873.268: [Rescan
> (parallel) , 0.0018720 secs]1873.270: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 36890K(158108K), 0.0019730 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1873.270: [CMS-concurrent-sweep-start]
> 1873.273: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1873.273: [CMS-concurrent-reset-start]
> 1873.282: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1875.283: [GC [1 CMS-initial-mark: 24053K(40092K)] 37018K(158108K),
> 0.0024410 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1875.285: [CMS-concurrent-mark-start]
> 1875.302: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1875.302: [CMS-concurrent-preclean-start]
> 1875.302: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1875.303: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1880.318:
> [CMS-concurrent-abortable-preclean: 0.110/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1880.318: [GC[YG occupancy: 13286 K (118016 K)]1880.318: [Rescan
> (parallel) , 0.0023860 secs]1880.321: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 37339K(158108K), 0.0024910 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1880.321: [CMS-concurrent-sweep-start]
> 1880.324: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1880.324: [CMS-concurrent-reset-start]
> 1880.333: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 1882.334: [GC [1 CMS-initial-mark: 24053K(40092K)] 37467K(158108K),
> 0.0024090 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1882.336: [CMS-concurrent-mark-start]
> 1882.352: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1882.352: [CMS-concurrent-preclean-start]
> 1882.353: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1882.353: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1887.368:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1887.368: [GC[YG occupancy: 13739 K (118016 K)]1887.368: [Rescan
> (parallel) , 0.0022370 secs]1887.370: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 37792K(158108K), 0.0023360 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1887.371: [CMS-concurrent-sweep-start]
> 1887.374: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1887.374: [CMS-concurrent-reset-start]
> 1887.383: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1889.384: [GC [1 CMS-initial-mark: 24053K(40092K)] 37920K(158108K),
> 0.0024690 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1889.386: [CMS-concurrent-mark-start]
> 1889.404: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1889.404: [CMS-concurrent-preclean-start]
> 1889.405: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.01 sys=0.00, real=0.00 secs]
> 1889.405: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1894.488:
> [CMS-concurrent-abortable-preclean: 0.112/5.083 secs] [Times:
> user=0.11 sys=0.00, real=5.08 secs]
> 1894.488: [GC[YG occupancy: 14241 K (118016 K)]1894.488: [Rescan
> (parallel) , 0.0020670 secs]1894.490: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 38294K(158108K), 0.0021630 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1894.490: [CMS-concurrent-sweep-start]
> 1894.494: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1894.494: [CMS-concurrent-reset-start]
> 1894.503: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1896.503: [GC [1 CMS-initial-mark: 24053K(40092K)] 38422K(158108K),
> 0.0025430 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1896.506: [CMS-concurrent-mark-start]
> 1896.524: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1896.524: [CMS-concurrent-preclean-start]
> 1896.525: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1896.525: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1901.540:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1901.540: [GC[YG occupancy: 14690 K (118016 K)]1901.540: [Rescan
> (parallel) , 0.0014810 secs]1901.542: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 38743K(158108K), 0.0015820 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1901.542: [CMS-concurrent-sweep-start]
> 1901.545: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1901.545: [CMS-concurrent-reset-start]
> 1901.555: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1903.555: [GC [1 CMS-initial-mark: 24053K(40092K)] 38871K(158108K),
> 0.0025990 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1903.558: [CMS-concurrent-mark-start]
> 1903.575: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1903.575: [CMS-concurrent-preclean-start]
> 1903.576: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1903.576: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1908.586:
> [CMS-concurrent-abortable-preclean: 0.105/5.010 secs] [Times:
> user=0.10 sys=0.00, real=5.01 secs]
> 1908.587: [GC[YG occupancy: 15207 K (118016 K)]1908.587: [Rescan
> (parallel) , 0.0026240 secs]1908.589: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 39260K(158108K), 0.0027260 secs]
> [Times: user=0.01 sys=0.00, real=0.00 secs]
> 1908.589: [CMS-concurrent-sweep-start]
> 1908.593: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1908.593: [CMS-concurrent-reset-start]
> 1908.602: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1910.602: [GC [1 CMS-initial-mark: 24053K(40092K)] 39324K(158108K),
> 0.0025610 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1910.605: [CMS-concurrent-mark-start]
> 1910.621: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1910.621: [CMS-concurrent-preclean-start]
> 1910.622: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.01 sys=0.00, real=0.00 secs]
> 1910.622: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1915.684:
> [CMS-concurrent-abortable-preclean: 0.112/5.062 secs] [Times:
> user=0.11 sys=0.00, real=5.07 secs]
> 1915.684: [GC[YG occupancy: 15592 K (118016 K)]1915.684: [Rescan
> (parallel) , 0.0023940 secs]1915.687: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 39645K(158108K), 0.0025050 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1915.687: [CMS-concurrent-sweep-start]
> 1915.690: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1915.690: [CMS-concurrent-reset-start]
> 1915.699: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1917.700: [GC [1 CMS-initial-mark: 24053K(40092K)] 39838K(158108K),
> 0.0025010 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1917.702: [CMS-concurrent-mark-start]
> 1917.719: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1917.719: [CMS-concurrent-preclean-start]
> 1917.719: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1917.719: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1922.735:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.01, real=5.02 secs]
> 1922.735: [GC[YG occupancy: 16198 K (118016 K)]1922.735: [Rescan
> (parallel) , 0.0028750 secs]1922.738: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 40251K(158108K), 0.0029760 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1922.738: [CMS-concurrent-sweep-start]
> 1922.741: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1922.741: [CMS-concurrent-reset-start]
> 1922.751: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1922.957: [GC [1 CMS-initial-mark: 24053K(40092K)] 40324K(158108K),
> 0.0027380 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1922.960: [CMS-concurrent-mark-start]
> 1922.978: [CMS-concurrent-mark: 0.017/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1922.978: [CMS-concurrent-preclean-start]
> 1922.979: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1922.979: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1927.994:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.02 secs]
> 1927.995: [GC[YG occupancy: 16645 K (118016 K)]1927.995: [Rescan
> (parallel) , 0.0013210 secs]1927.996: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 40698K(158108K), 0.0017610 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1927.996: [CMS-concurrent-sweep-start]
> 1928.000: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1928.000: [CMS-concurrent-reset-start]
> 1928.009: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1930.009: [GC [1 CMS-initial-mark: 24053K(40092K)] 40826K(158108K),
> 0.0028310 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1930.012: [CMS-concurrent-mark-start]
> 1930.028: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1930.028: [CMS-concurrent-preclean-start]
> 1930.029: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1930.029: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1935.044:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1935.045: [GC[YG occupancy: 17098 K (118016 K)]1935.045: [Rescan
> (parallel) , 0.0015440 secs]1935.046: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 41151K(158108K), 0.0016490 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1935.046: [CMS-concurrent-sweep-start]
> 1935.050: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1935.050: [CMS-concurrent-reset-start]
> 1935.059: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1937.059: [GC [1 CMS-initial-mark: 24053K(40092K)] 41279K(158108K),
> 0.0028290 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1937.062: [CMS-concurrent-mark-start]
> 1937.079: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1937.079: [CMS-concurrent-preclean-start]
> 1937.079: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1937.079: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1942.095:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.01, real=5.02 secs]
> 1942.095: [GC[YG occupancy: 17547 K (118016 K)]1942.095: [Rescan
> (parallel) , 0.0030270 secs]1942.098: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 41600K(158108K), 0.0031250 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1942.098: [CMS-concurrent-sweep-start]
> 1942.101: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1942.101: [CMS-concurrent-reset-start]
> 1942.111: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1944.111: [GC [1 CMS-initial-mark: 24053K(40092K)] 41728K(158108K),
> 0.0028080 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1944.114: [CMS-concurrent-mark-start]
> 1944.130: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1944.130: [CMS-concurrent-preclean-start]
> 1944.131: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1944.131: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1949.146:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1949.146: [GC[YG occupancy: 17996 K (118016 K)]1949.146: [Rescan
> (parallel) , 0.0028800 secs]1949.149: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 42049K(158108K), 0.0029810 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1949.149: [CMS-concurrent-sweep-start]
> 1949.152: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1949.152: [CMS-concurrent-reset-start]
> 1949.162: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1951.162: [GC [1 CMS-initial-mark: 24053K(40092K)] 42177K(158108K),
> 0.0028760 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1951.165: [CMS-concurrent-mark-start]
> 1951.184: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1951.184: [CMS-concurrent-preclean-start]
> 1951.184: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1951.184: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1956.244:
> [CMS-concurrent-abortable-preclean: 0.112/5.059 secs] [Times:
> user=0.11 sys=0.01, real=5.05 secs]
> 1956.244: [GC[YG occupancy: 18498 K (118016 K)]1956.244: [Rescan
> (parallel) , 0.0019760 secs]1956.246: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 42551K(158108K), 0.0020750 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 1956.246: [CMS-concurrent-sweep-start]
> 1956.249: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1956.249: [CMS-concurrent-reset-start]
> 1956.259: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1958.259: [GC [1 CMS-initial-mark: 24053K(40092K)] 42747K(158108K),
> 0.0029160 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1958.262: [CMS-concurrent-mark-start]
> 1958.279: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1958.279: [CMS-concurrent-preclean-start]
> 1958.279: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1958.279: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1963.295:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 1963.295: [GC[YG occupancy: 18951 K (118016 K)]1963.295: [Rescan
> (parallel) , 0.0020140 secs]1963.297: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 43004K(158108K), 0.0021100 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1963.297: [CMS-concurrent-sweep-start]
> 1963.300: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1963.300: [CMS-concurrent-reset-start]
> 1963.310: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1965.310: [GC [1 CMS-initial-mark: 24053K(40092K)] 43132K(158108K),
> 0.0029420 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1965.313: [CMS-concurrent-mark-start]
> 1965.329: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1965.329: [CMS-concurrent-preclean-start]
> 1965.330: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1965.330: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1970.345:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.02 secs]
> 1970.345: [GC[YG occupancy: 19400 K (118016 K)]1970.345: [Rescan
> (parallel) , 0.0031610 secs]1970.349: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 43453K(158108K), 0.0032580 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1970.349: [CMS-concurrent-sweep-start]
> 1970.352: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1970.352: [CMS-concurrent-reset-start]
> 1970.361: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1972.362: [GC [1 CMS-initial-mark: 24053K(40092K)] 43581K(158108K),
> 0.0029960 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1972.365: [CMS-concurrent-mark-start]
> 1972.381: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 1972.381: [CMS-concurrent-preclean-start]
> 1972.382: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1972.382: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1977.397:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1977.398: [GC[YG occupancy: 19849 K (118016 K)]1977.398: [Rescan
> (parallel) , 0.0018110 secs]1977.399: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 43902K(158108K), 0.0019100 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1977.400: [CMS-concurrent-sweep-start]
> 1977.403: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1977.403: [CMS-concurrent-reset-start]
> 1977.412: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1979.413: [GC [1 CMS-initial-mark: 24053K(40092K)] 44031K(158108K),
> 0.0030240 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 1979.416: [CMS-concurrent-mark-start]
> 1979.434: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
> sys=0.00, real=0.02 secs]
> 1979.434: [CMS-concurrent-preclean-start]
> 1979.434: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1979.434: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1984.511:
> [CMS-concurrent-abortable-preclean: 0.112/5.077 secs] [Times:
> user=0.12 sys=0.00, real=5.07 secs]
> 1984.511: [GC[YG occupancy: 20556 K (118016 K)]1984.511: [Rescan
> (parallel) , 0.0032740 secs]1984.514: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 44609K(158108K), 0.0033720 secs]
> [Times: user=0.03 sys=0.00, real=0.01 secs]
> 1984.515: [CMS-concurrent-sweep-start]
> 1984.518: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1984.518: [CMS-concurrent-reset-start]
> 1984.527: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1986.528: [GC [1 CMS-initial-mark: 24053K(40092K)] 44737K(158108K),
> 0.0032890 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1986.531: [CMS-concurrent-mark-start]
> 1986.548: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 1986.548: [CMS-concurrent-preclean-start]
> 1986.548: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1986.548: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1991.564:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 1991.564: [GC[YG occupancy: 21005 K (118016 K)]1991.564: [Rescan
> (parallel) , 0.0022540 secs]1991.566: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 45058K(158108K), 0.0023650 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 1991.566: [CMS-concurrent-sweep-start]
> 1991.570: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 1991.570: [CMS-concurrent-reset-start]
> 1991.579: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 1993.579: [GC [1 CMS-initial-mark: 24053K(40092K)] 45187K(158108K),
> 0.0032480 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 1993.583: [CMS-concurrent-mark-start]
> 1993.599: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 1993.599: [CMS-concurrent-preclean-start]
> 1993.600: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 1993.600: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 1998.688:
> [CMS-concurrent-abortable-preclean: 0.112/5.089 secs] [Times:
> user=0.10 sys=0.01, real=5.09 secs]
> 1998.689: [GC[YG occupancy: 21454 K (118016 K)]1998.689: [Rescan
> (parallel) , 0.0025510 secs]1998.691: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 45507K(158108K), 0.0026500 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 1998.691: [CMS-concurrent-sweep-start]
> 1998.695: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 1998.695: [CMS-concurrent-reset-start]
> 1998.704: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 2000.704: [GC [1 CMS-initial-mark: 24053K(40092K)] 45636K(158108K),
> 0.0033350 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2000.708: [CMS-concurrent-mark-start]
> 2000.726: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2000.726: [CMS-concurrent-preclean-start]
> 2000.726: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2000.726: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2005.742:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.01 secs]
> 2005.742: [GC[YG occupancy: 21968 K (118016 K)]2005.742: [Rescan
> (parallel) , 0.0027300 secs]2005.745: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 46021K(158108K), 0.0028560 secs]
> [Times: user=0.02 sys=0.01, real=0.01 secs]
> 2005.745: [CMS-concurrent-sweep-start]
> 2005.748: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2005.748: [CMS-concurrent-reset-start]
> 2005.757: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.01, real=0.01 secs]
> 2007.758: [GC [1 CMS-initial-mark: 24053K(40092K)] 46217K(158108K),
> 0.0033290 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2007.761: [CMS-concurrent-mark-start]
> 2007.778: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2007.778: [CMS-concurrent-preclean-start]
> 2007.778: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2007.778: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2012.794:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 2012.794: [GC[YG occupancy: 22421 K (118016 K)]2012.794: [Rescan
> (parallel) , 0.0036890 secs]2012.798: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 46474K(158108K), 0.0037910 secs]
> [Times: user=0.02 sys=0.01, real=0.00 secs]
> 2012.798: [CMS-concurrent-sweep-start]
> 2012.801: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2012.801: [CMS-concurrent-reset-start]
> 2012.810: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2012.980: [GC [1 CMS-initial-mark: 24053K(40092K)] 46547K(158108K),
> 0.0033990 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 2012.984: [CMS-concurrent-mark-start]
> 2013.004: [CMS-concurrent-mark: 0.019/0.020 secs] [Times: user=0.06
> sys=0.01, real=0.02 secs]
> 2013.004: [CMS-concurrent-preclean-start]
> 2013.005: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2013.005: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2018.020:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.01 secs]
> 2018.020: [GC[YG occupancy: 22867 K (118016 K)]2018.020: [Rescan
> (parallel) , 0.0025180 secs]2018.023: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 46920K(158108K), 0.0026190 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 2018.023: [CMS-concurrent-sweep-start]
> 2018.026: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2018.026: [CMS-concurrent-reset-start]
> 2018.036: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2020.036: [GC [1 CMS-initial-mark: 24053K(40092K)] 47048K(158108K),
> 0.0034020 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2020.039: [CMS-concurrent-mark-start]
> 2020.057: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2020.057: [CMS-concurrent-preclean-start]
> 2020.058: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2020.058: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2025.073:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2025.073: [GC[YG occupancy: 23316 K (118016 K)]2025.073: [Rescan
> (parallel) , 0.0020110 secs]2025.075: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 47369K(158108K), 0.0021080 secs]
> [Times: user=0.02 sys=0.00, real=0.00 secs]
> 2025.075: [CMS-concurrent-sweep-start]
> 2025.079: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2025.079: [CMS-concurrent-reset-start]
> 2025.088: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2027.088: [GC [1 CMS-initial-mark: 24053K(40092K)] 47498K(158108K),
> 0.0034100 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2027.092: [CMS-concurrent-mark-start]
> 2027.108: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2027.108: [CMS-concurrent-preclean-start]
> 2027.109: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2027.109: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2032.120:
> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
> user=0.10 sys=0.00, real=5.01 secs]
> 2032.120: [GC[YG occupancy: 23765 K (118016 K)]2032.120: [Rescan
> (parallel) , 0.0025970 secs]2032.123: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 47818K(158108K), 0.0026940 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 2032.123: [CMS-concurrent-sweep-start]
> 2032.126: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2032.126: [CMS-concurrent-reset-start]
> 2032.135: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2034.136: [GC [1 CMS-initial-mark: 24053K(40092K)] 47951K(158108K),
> 0.0034720 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2034.139: [CMS-concurrent-mark-start]
> 2034.156: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2034.156: [CMS-concurrent-preclean-start]
> 2034.156: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2034.156: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2039.171:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2039.172: [GC[YG occupancy: 24218 K (118016 K)]2039.172: [Rescan
> (parallel) , 0.0038590 secs]2039.176: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 48271K(158108K), 0.0039560 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2039.176: [CMS-concurrent-sweep-start]
> 2039.179: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2039.179: [CMS-concurrent-reset-start]
> 2039.188: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2041.188: [GC [1 CMS-initial-mark: 24053K(40092K)] 48400K(158108K),
> 0.0035110 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2041.192: [CMS-concurrent-mark-start]
> 2041.209: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2041.209: [CMS-concurrent-preclean-start]
> 2041.209: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2041.209: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2046.268:
> [CMS-concurrent-abortable-preclean: 0.108/5.058 secs] [Times:
> user=0.12 sys=0.00, real=5.06 secs]
> 2046.268: [GC[YG occupancy: 24813 K (118016 K)]2046.268: [Rescan
> (parallel) , 0.0042050 secs]2046.272: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 48866K(158108K), 0.0043070 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2046.272: [CMS-concurrent-sweep-start]
> 2046.275: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2046.275: [CMS-concurrent-reset-start]
> 2046.285: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2048.285: [GC [1 CMS-initial-mark: 24053K(40092K)] 48994K(158108K),
> 0.0037700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2048.289: [CMS-concurrent-mark-start]
> 2048.307: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2048.307: [CMS-concurrent-preclean-start]
> 2048.307: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2048.307: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2053.323:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2053.323: [GC[YG occupancy: 25262 K (118016 K)]2053.323: [Rescan
> (parallel) , 0.0030780 secs]2053.326: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 49315K(158108K), 0.0031760 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2053.326: [CMS-concurrent-sweep-start]
> 2053.329: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2053.329: [CMS-concurrent-reset-start]
> 2053.338: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2055.339: [GC [1 CMS-initial-mark: 24053K(40092K)] 49444K(158108K),
> 0.0037730 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2055.343: [CMS-concurrent-mark-start]
> 2055.359: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2055.359: [CMS-concurrent-preclean-start]
> 2055.360: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2055.360: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2060.373:
> [CMS-concurrent-abortable-preclean: 0.107/5.013 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2060.373: [GC[YG occupancy: 25715 K (118016 K)]2060.373: [Rescan
> (parallel) , 0.0037090 secs]2060.377: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 49768K(158108K), 0.0038110 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2060.377: [CMS-concurrent-sweep-start]
> 2060.380: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2060.380: [CMS-concurrent-reset-start]
> 2060.389: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2062.390: [GC [1 CMS-initial-mark: 24053K(40092K)] 49897K(158108K),
> 0.0037860 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2062.394: [CMS-concurrent-mark-start]
> 2062.410: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2062.410: [CMS-concurrent-preclean-start]
> 2062.411: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2062.411: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2067.426:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.02 secs]
> 2067.427: [GC[YG occupancy: 26231 K (118016 K)]2067.427: [Rescan
> (parallel) , 0.0031980 secs]2067.430: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 50284K(158108K), 0.0033100 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2067.430: [CMS-concurrent-sweep-start]
> 2067.433: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2067.433: [CMS-concurrent-reset-start]
> 2067.443: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2069.443: [GC [1 CMS-initial-mark: 24053K(40092K)] 50412K(158108K),
> 0.0038060 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 2069.447: [CMS-concurrent-mark-start]
> 2069.465: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2069.465: [CMS-concurrent-preclean-start]
> 2069.465: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2069.465: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2074.535:
> [CMS-concurrent-abortable-preclean: 0.112/5.070 secs] [Times:
> user=0.12 sys=0.00, real=5.06 secs]
> 2074.535: [GC[YG occupancy: 26749 K (118016 K)]2074.535: [Rescan
> (parallel) , 0.0040450 secs]2074.539: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 50802K(158108K), 0.0041460 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2074.539: [CMS-concurrent-sweep-start]
> 2074.543: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2074.543: [CMS-concurrent-reset-start]
> 2074.552: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2076.552: [GC [1 CMS-initial-mark: 24053K(40092K)] 50930K(158108K),
> 0.0038960 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 2076.556: [CMS-concurrent-mark-start]
> 2076.575: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2076.575: [CMS-concurrent-preclean-start]
> 2076.575: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2076.575: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2081.590:
> [CMS-concurrent-abortable-preclean: 0.109/5.014 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2081.590: [GC[YG occupancy: 27198 K (118016 K)]2081.590: [Rescan
> (parallel) , 0.0042420 secs]2081.594: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 51251K(158108K), 0.0043450 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2081.594: [CMS-concurrent-sweep-start]
> 2081.597: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2081.597: [CMS-concurrent-reset-start]
> 2081.607: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2083.607: [GC [1 CMS-initial-mark: 24053K(40092K)] 51447K(158108K),
> 0.0038630 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2083.611: [CMS-concurrent-mark-start]
> 2083.628: [CMS-concurrent-mark: 0.017/0.017 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2083.628: [CMS-concurrent-preclean-start]
> 2083.628: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2083.628: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2088.642:
> [CMS-concurrent-abortable-preclean: 0.108/5.014 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2088.642: [GC[YG occupancy: 27651 K (118016 K)]2088.642: [Rescan
> (parallel) , 0.0031520 secs]2088.645: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 51704K(158108K), 0.0032520 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2088.645: [CMS-concurrent-sweep-start]
> 2088.649: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2088.649: [CMS-concurrent-reset-start]
> 2088.658: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2090.658: [GC [1 CMS-initial-mark: 24053K(40092K)] 51832K(158108K),
> 0.0039130 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2090.662: [CMS-concurrent-mark-start]
> 2090.678: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2090.678: [CMS-concurrent-preclean-start]
> 2090.679: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2090.679: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2095.690:
> [CMS-concurrent-abortable-preclean: 0.105/5.011 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2095.690: [GC[YG occupancy: 28100 K (118016 K)]2095.690: [Rescan
> (parallel) , 0.0024460 secs]2095.693: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 52153K(158108K), 0.0025460 secs]
> [Times: user=0.03 sys=0.00, real=0.00 secs]
> 2095.693: [CMS-concurrent-sweep-start]
> 2095.696: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2095.696: [CMS-concurrent-reset-start]
> 2095.705: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2096.616: [GC [1 CMS-initial-mark: 24053K(40092K)] 53289K(158108K),
> 0.0039340 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2096.620: [CMS-concurrent-mark-start]
> 2096.637: [CMS-concurrent-mark: 0.016/0.017 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2096.637: [CMS-concurrent-preclean-start]
> 2096.638: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2096.638: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2101.654:
> [CMS-concurrent-abortable-preclean: 0.110/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.01 secs]
> 2101.654: [GC[YG occupancy: 29557 K (118016 K)]2101.654: [Rescan
> (parallel) , 0.0034020 secs]2101.657: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 53610K(158108K), 0.0035000 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2101.657: [CMS-concurrent-sweep-start]
> 2101.661: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2101.661: [CMS-concurrent-reset-start]
> 2101.670: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2103.004: [GC [1 CMS-initial-mark: 24053K(40092K)] 53997K(158108K),
> 0.0042590 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2103.009: [CMS-concurrent-mark-start]
> 2103.027: [CMS-concurrent-mark: 0.017/0.019 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2103.027: [CMS-concurrent-preclean-start]
> 2103.028: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2103.028: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2108.043:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.10 sys=0.01, real=5.02 secs]
> 2108.043: [GC[YG occupancy: 30385 K (118016 K)]2108.044: [Rescan
> (parallel) , 0.0048950 secs]2108.048: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 54438K(158108K), 0.0049930 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2108.049: [CMS-concurrent-sweep-start]
> 2108.052: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2108.052: [CMS-concurrent-reset-start]
> 2108.061: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2110.062: [GC [1 CMS-initial-mark: 24053K(40092K)] 54502K(158108K),
> 0.0042120 secs] [Times: user=0.01 sys=0.00, real=0.01 secs]
> 2110.066: [CMS-concurrent-mark-start]
> 2110.084: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2110.084: [CMS-concurrent-preclean-start]
> 2110.085: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2110.085: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2115.100:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2115.101: [GC[YG occupancy: 30770 K (118016 K)]2115.101: [Rescan
> (parallel) , 0.0049040 secs]2115.106: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 54823K(158108K), 0.0050080 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2115.106: [CMS-concurrent-sweep-start]
> 2115.109: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2115.109: [CMS-concurrent-reset-start]
> 2115.118: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2117.118: [GC [1 CMS-initial-mark: 24053K(40092K)] 54952K(158108K),
> 0.0042490 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2117.123: [CMS-concurrent-mark-start]
> 2117.139: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2117.139: [CMS-concurrent-preclean-start]
> 2117.140: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2117.140: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2122.155:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.02 secs]
> 2122.155: [GC[YG occupancy: 31219 K (118016 K)]2122.155: [Rescan
> (parallel) , 0.0036460 secs]2122.159: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 55272K(158108K), 0.0037440 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2122.159: [CMS-concurrent-sweep-start]
> 2122.162: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2122.162: [CMS-concurrent-reset-start]
> 2122.172: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2124.172: [GC [1 CMS-initial-mark: 24053K(40092K)] 55401K(158108K),
> 0.0043010 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 2124.176: [CMS-concurrent-mark-start]
> 2124.195: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2124.195: [CMS-concurrent-preclean-start]
> 2124.195: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2124.195: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2129.211:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.12 sys=0.00, real=5.01 secs]
> 2129.211: [GC[YG occupancy: 31669 K (118016 K)]2129.211: [Rescan
> (parallel) , 0.0049870 secs]2129.216: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 55722K(158108K), 0.0050860 secs]
> [Times: user=0.04 sys=0.00, real=0.01 secs]
> 2129.216: [CMS-concurrent-sweep-start]
> 2129.219: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 2129.219: [CMS-concurrent-reset-start]
> 2129.228: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2131.229: [GC [1 CMS-initial-mark: 24053K(40092K)] 55850K(158108K),
> 0.0042340 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2131.233: [CMS-concurrent-mark-start]
> 2131.249: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2131.249: [CMS-concurrent-preclean-start]
> 2131.249: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2131.249: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2136.292:
> [CMS-concurrent-abortable-preclean: 0.108/5.042 secs] [Times:
> user=0.11 sys=0.00, real=5.04 secs]
> 2136.292: [GC[YG occupancy: 32174 K (118016 K)]2136.292: [Rescan
> (parallel) , 0.0037250 secs]2136.296: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 56227K(158108K), 0.0038250 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 2136.296: [CMS-concurrent-sweep-start]
> 2136.299: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2136.299: [CMS-concurrent-reset-start]
> 2136.308: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2138.309: [GC [1 CMS-initial-mark: 24053K(40092K)] 56356K(158108K),
> 0.0043040 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2138.313: [CMS-concurrent-mark-start]
> 2138.329: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.05
> sys=0.01, real=0.02 secs]
> 2138.329: [CMS-concurrent-preclean-start]
> 2138.329: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2138.329: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2143.341:
> [CMS-concurrent-abortable-preclean: 0.106/5.011 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2143.341: [GC[YG occupancy: 32623 K (118016 K)]2143.341: [Rescan
> (parallel) , 0.0038660 secs]2143.345: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 56676K(158108K), 0.0039760 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 2143.345: [CMS-concurrent-sweep-start]
> 2143.349: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2143.349: [CMS-concurrent-reset-start]
> 2143.358: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2145.358: [GC [1 CMS-initial-mark: 24053K(40092K)] 56805K(158108K),
> 0.0043390 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2145.362: [CMS-concurrent-mark-start]
> 2145.379: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2145.379: [CMS-concurrent-preclean-start]
> 2145.379: [CMS-concurrent-preclean: 0.000/0.000 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2145.379: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2150.393:
> [CMS-concurrent-abortable-preclean: 0.108/5.014 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2150.393: [GC[YG occupancy: 33073 K (118016 K)]2150.393: [Rescan
> (parallel) , 0.0038190 secs]2150.397: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 57126K(158108K), 0.0039210 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 2150.397: [CMS-concurrent-sweep-start]
> 2150.400: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2150.400: [CMS-concurrent-reset-start]
> 2150.410: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2152.410: [GC [1 CMS-initial-mark: 24053K(40092K)] 57254K(158108K),
> 0.0044080 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2152.415: [CMS-concurrent-mark-start]
> 2152.431: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2152.431: [CMS-concurrent-preclean-start]
> 2152.432: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2152.432: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2157.447:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.01, real=5.02 secs]
> 2157.447: [GC[YG occupancy: 33522 K (118016 K)]2157.447: [Rescan
> (parallel) , 0.0038130 secs]2157.451: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 57575K(158108K), 0.0039160 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2157.451: [CMS-concurrent-sweep-start]
> 2157.454: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2157.454: [CMS-concurrent-reset-start]
> 2157.464: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 2159.464: [GC [1 CMS-initial-mark: 24053K(40092K)] 57707K(158108K),
> 0.0045170 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2159.469: [CMS-concurrent-mark-start]
> 2159.483: [CMS-concurrent-mark: 0.014/0.014 secs] [Times: user=0.06
> sys=0.00, real=0.01 secs]
> 2159.483: [CMS-concurrent-preclean-start]
> 2159.483: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2159.483: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2164.491:
> [CMS-concurrent-abortable-preclean: 0.111/5.007 secs] [Times:
> user=0.12 sys=0.00, real=5.01 secs]
> 2164.491: [GC[YG occupancy: 34293 K (118016 K)]2164.491: [Rescan
> (parallel) , 0.0052070 secs]2164.496: [weak refs processing, 0.0000120
> secs] [1 CMS-remark: 24053K(40092K)] 58347K(158108K), 0.0053130 secs]
> [Times: user=0.06 sys=0.00, real=0.01 secs]
> 2164.496: [CMS-concurrent-sweep-start]
> 2164.500: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2164.500: [CMS-concurrent-reset-start]
> 2164.509: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.01, real=0.01 secs]
> 2166.509: [GC [1 CMS-initial-mark: 24053K(40092K)] 58475K(158108K),
> 0.0045900 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2166.514: [CMS-concurrent-mark-start]
> 2166.533: [CMS-concurrent-mark: 0.019/0.019 secs] [Times: user=0.07
> sys=0.00, real=0.02 secs]
> 2166.533: [CMS-concurrent-preclean-start]
> 2166.533: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2166.533: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2171.549:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.02 secs]
> 2171.549: [GC[YG occupancy: 34743 K (118016 K)]2171.549: [Rescan
> (parallel) , 0.0052200 secs]2171.554: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 58796K(158108K), 0.0053210 secs]
> [Times: user=0.05 sys=0.00, real=0.01 secs]
> 2171.554: [CMS-concurrent-sweep-start]
> 2171.558: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2171.558: [CMS-concurrent-reset-start]
> 2171.567: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2173.567: [GC [1 CMS-initial-mark: 24053K(40092K)] 58924K(158108K),
> 0.0046700 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 2173.572: [CMS-concurrent-mark-start]
> 2173.588: [CMS-concurrent-mark: 0.016/0.016 secs] [Times: user=0.06
> sys=0.00, real=0.02 secs]
> 2173.588: [CMS-concurrent-preclean-start]
> 2173.589: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2173.589: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2178.604:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.10 sys=0.01, real=5.02 secs]
> 2178.605: [GC[YG occupancy: 35192 K (118016 K)]2178.605: [Rescan
> (parallel) , 0.0041460 secs]2178.609: [weak refs processing, 0.0000110
> secs] [1 CMS-remark: 24053K(40092K)] 59245K(158108K), 0.0042450 secs]
> [Times: user=0.04 sys=0.00, real=0.00 secs]
> 2178.609: [CMS-concurrent-sweep-start]
> 2178.612: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.01
> sys=0.00, real=0.00 secs]
> 2178.612: [CMS-concurrent-reset-start]
> 2178.622: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.00
> sys=0.00, real=0.01 secs]
> 2180.622: [GC [1 CMS-initial-mark: 24053K(40092K)] 59373K(158108K),
> 0.0047200 secs] [Times: user=0.00 sys=0.00, real=0.01 secs]
> 2180.627: [CMS-concurrent-mark-start]
> 2180.645: [CMS-concurrent-mark: 0.018/0.018 secs] [Times: user=0.08
> sys=0.00, real=0.02 secs]
> 2180.645: [CMS-concurrent-preclean-start]
> 2180.645: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
> 2180.645: [CMS-concurrent-abortable-preclean-start]
> CMS: abort preclean due to time 2185.661:
> [CMS-concurrent-abortable-preclean: 0.109/5.015 secs] [Times:
> user=0.11 sys=0.00, real=5.01 secs]
> 2185.661: [GC[YG occupancy: 35645 K (118016 K)]2185.661: [Rescan
> (parallel) , 0.0050730 secs]2185.666: [weak refs processing, 0.0000100
> secs] [1 CMS-remark: 24053K(40092K)] 59698K(158108K), 0.0051720 secs]
> [Times: user=0.04 sys=0.01, real=0.01 secs]
> 2185.666: [CMS-concurrent-sweep-start]
> 2185.670: [CMS-concurrent-sweep: 0.003/0.003 secs] [Times: user=0.00
> sys=0.00, real=0.00 secs]
> 2185.670: [CMS-concurrent-reset-start]
> 2185.679: [CMS-concurrent-reset: 0.009/0.009 secs] [Times: user=0.01
> sys=0.00, real=0.01 secs]
> 2187.679: [GC [1 CMS-initial-mark: 24053K(40092K)] 59826K(158108K),
> 0.0047350 secs]
> 
> --
> gregross:)
>