Posted to user@hbase.apache.org by jingych <ji...@neusoft.com> on 2014/04/30 03:26:47 UTC

Java Client Write Data blocked

Hi, All!

I need help!

I ran a Java client to write 3 million rows into HBase,

but after writing almost 1 million, the process blocked without any exception.

Does anyone know a possible reason, so I can find a solution?

Thanks All! 

By the way, the HBase Version is 0.94.6-cdh4.5.0!

Thanks again!




jingych
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) 
is intended only for the use of the intended recipient and may be confidential and/or privileged of 
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is 
not the intended recipient, unauthorized use, forwarding, printing,  storing, disclosure or copying 
is strictly prohibited, and may be unlawful.If you have received this communication in error,please 
immediately notify the sender by return e-mail, and delete the original message and all copies from 
your system. Thank you. 
---------------------------------------------------------------------------------------------------

Re: Re: Java Client Write Data blocked

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
Look at when your data injection stopped, then look at the logs around that
time and see if there are any exceptions. You can paste part of
those logs into pastebin.com and post the link here.

You might want to run some Performance Evaluation tests against your
cluster to see if it gives expected results...

JM


2014-04-29 21:50 GMT-04:00 jingych <ji...@neusoft.com>:

>  Thanks JM.
>
> But the log is too big. How can I post the log file?
>
> The query from HBase is slower too.
>
> The network is OK, I'm sure.
>
> Does GC have a log file? And how can I check swap usage?
>
> Sorry, I'm a rookie.
>
> ------------------------------
>
> jingych
>
>  *From:* Jean-Marc Spaggiari <je...@spaggiari.org>
> *Date:* 2014-04-30 09:30
> *To:* user <us...@hbase.apache.org>; jingych <ji...@neusoft.com>
> *Subject:* Re: Java Client Write Data blocked
>   Any logs?
>
> Garbage collection on the server side? Network issue? Swap?
>
> Please share your master and region servers logs so we can provide
> feedback.
>
> JM
>
>
> 2014-04-29 21:26 GMT-04:00 jingych <ji...@neusoft.com>:
>
>> Hi, All!
>>
>> I need help!
>>
>> I ran a Java client to write 3 million rows into HBase,
>>
>> but after writing almost 1 million, the process blocked without any
>> exception.
>>
>> Does anyone know a possible reason, so I can find a solution?
>>
>> Thanks All!
>>
>> By the way, the HBase Version is 0.94.6-cdh4.5.0!
>>
>> Thanks again!
>>
>>
>>
>>
>> jingych
>>
>>
>
>
>

Re: Re: Java Client Write Data blocked

Posted by Adrien Mogenet <ad...@gmail.com>.
Nope, HTable#put(Put) is a single synchronous call.

As JM said, use HTable#put(List<Put>) to send several Put operations to
HBase.
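To make the suggestion concrete, here is a minimal sketch of batching Puts with the 0.94-era client API. The table name `gspt_jbxx` and column family `i` are taken from the logs in this thread; the row keys, qualifier, values, and batch size are made up for illustration, and this obviously needs a running cluster to execute:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchPutExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "gspt_jbxx"); // table name from this thread
        try {
            List<Put> batch = new ArrayList<Put>();
            for (int i = 0; i < 3000000; i++) {
                Put put = new Put(Bytes.toBytes("row-" + i)); // hypothetical row key
                // "i" is the column family seen in the region server logs;
                // qualifier and value are placeholders
                put.add(Bytes.toBytes("i"), Bytes.toBytes("q"), Bytes.toBytes("v" + i));
                batch.add(put);
                if (batch.size() == 10000) { // send every 10k rows in one call
                    table.put(batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                table.put(batch); // flush the remainder
            }
        } finally {
            table.close();
        }
    }
}
```

Sending a `List<Put>` lets the client group operations per region server instead of paying one round trip per row.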


On Sun, May 4, 2014 at 4:19 AM, jingych <ji...@neusoft.com> wrote:

> Thanks JM!
>
> Sorry for reply you so late!
>
> I set the client write buffer to 2 MB and execute with table#put(Put). Is
> this a batch commit?
>
> We have 3 test nodes with 8 GB of memory each.
> And the region server heap size was the default, 256 MB.
> With this config, one write thread blocked.
>
> Then I increased the region server heap size to 2 GB, and all 3 write
> threads blocked.
>
>
> Best Regards!
>
>
>
>
> Jing Yucheng (井玉成)
>
> Basic Software Business Unit
> Neusoft Corporation
> Mobile: 13889491801
> Tel: 0411-84835702
>
> Room 217, Building D1, No. 901 Huangpu Road, Ganjingzi District, Dalian
> Postcode: 116085
> Email: jingych@neusoft.com
>
> From: Jean-Marc Spaggiari
> Date: 2014-04-30 20:04
> To: jingych
> CC: user
> Subject: Re: Re: Java Client Write Data blocked
> Compactions are "normal".
>
>
> How do you write your data into HBase?
>
> - put(Put)
>
> - put(<Puts>)
>
> - batch(<Puts>)
>
> - Bulk Loading
>
> - etc.
>
>
> What is your HBase configuration?
>
> - Heap size?
>
> - Regions sizes?
>
> - etc.
>
>
> How much memory do you have on each node?
> How many nodes do you have?
>
>
> Also, is there really nothing in the logs between 2014-04-30 10:17:11,961
> and 2014-04-30 11:10:14,842?
>
>
>
>
> 2014-04-30 2:19 GMT-04:00 jingych <ji...@neusoft.com>:
>
> Hi,JM!
>
> I found the {responseTooSlow} info, and the region does "Starting
> compaction", "Delete Family Bloom filter", "Renaming compacted file" and
> "Completed major compaction" again and again!
>
> Could you please tell me what's going on?
>
> Thanks!
>
> 2014-04-30 10:16:37,044 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter
> type for
> hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/9de4b874dfbc4d5292fce37be162fd1d:
> CompoundBloomFilterWriter
> 2014-04-30 10:16:40,234 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
> DeleteFamily was added to HFile
> (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/9de4b874dfbc4d5292fce37be162fd1d)
> 2014-04-30 10:16:40,234 INFO org.apache.hadoop.hbase.regionserver.Store:
> Flushed , sequenceid=834174, memsize=65.2 M, into tmp file
> hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/9de4b874dfbc4d5292fce37be162fd1d
> 2014-04-30 10:16:40,264 INFO org.apache.hadoop.hbase.regionserver.Store:
> Added
> hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/i/9de4b874dfbc4d5292fce37be162fd1d,
> entries=319488, sequenceid=834174, filesize=22.9 M
> 2014-04-30 10:16:40,266 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Finished memstore flush of ~65.2 M/68412616, currentsize=2.5 M/2622152 for
> region
> gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb.
> in 3276ms, sequenceid=834174, compaction requested=true
> 2014-04-30 10:16:40,266 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on i in region
> gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb.
> 2014-04-30 10:16:40,267 INFO org.apache.hadoop.hbase.regionserver.Store:
> Starting compaction of 3 file(s) in i of
> gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb.
> into
> tmpdir=hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp,
> seqid=834174, totalSize=161.7 M
> 2014-04-30 10:16:40,281 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter
> type for
> hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/87d8063d5b0c4e2fbf1196613cad6924:
> CompoundBloomFilterWriter
> 2014-04-30 10:17:11,854 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
> DeleteFamily was added to HFile
> (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/87d8063d5b0c4e2fbf1196613cad6924)
> 2014-04-30 10:17:11,884 INFO org.apache.hadoop.hbase.regionserver.Store:
> Renaming compacted file at
> hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/87d8063d5b0c4e2fbf1196613cad6924
> to
> hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/i/87d8063d5b0c4e2fbf1196613cad6924
> 2014-04-30 10:17:11,960 INFO org.apache.hadoop.hbase.regionserver.Store:
> Completed compaction of 3 file(s) in i of
> gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb.
> into 87d8063d5b0c4e2fbf1196613cad6924, size=161.7 M; total size for store
> is 664.3 M
> 2014-04-30 10:17:11,961 INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest:
> completed compaction:
> regionName=gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb.,
> storeName=i, fileCount=3, fileSize=161.7 M, priority=3,
> time=2853849519897449; duration=31sec
> 2014-04-30 11:10:14,842 WARN org.apache.hadoop.ipc.HBaseServer:
> (responseTooSlow):
> {"processingtimems":11689,"call":"next(8855097496052270204, 1024), rpc
> version=1, client version=29, methodsFingerPrint=-1368823753","client":"
> 192.168.196.214:34626
> ","starttimems":1398827403069,"queuetimems":0,"class":"HRegionServer","responsesize":6,"method":"next"}
> 2014-04-30 11:11:02,658 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner
> 7192967863259784397 lease expired on region
> test_cdc,279790,1396578140253.0d3d27285aa872ec855d282c1705811a.
> 2014-04-30 11:11:02,674 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner
> -9210070512105072073 lease expired on region
> test_cdc,460072a7-0637-4d73-bc85-38cf78b05ec1,1396579225171.4bf9a540ba138b9cf944bac2110085be.
> 2014-04-30 11:11:02,719 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner
> -2077930805323086067 lease expired on region
> test_cdc,9dccc20f-a1b0-4455-8cbf-d09023f8d28e,1396578743616.8b698daab6cd3975bb0ddb557fefcca9.
> 2014-04-30 11:12:29,999 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner
> 7998402181053931475 lease expired on region
> test_cdc,279790,1396578140253.0d3d27285aa872ec855d282c1705811a.
> 2014-04-30 11:12:30,014 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner
> 2545895780242061091 lease expired on region
> test_cdc,460072a7-0637-4d73-bc85-38cf78b05ec1,1396579225171.4bf9a540ba138b9cf944bac2110085be.
> 2014-04-30 11:12:30,058 INFO
> org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner
> 3936133421604150523 lease expired on region
> test_cdc,9dccc20f-a1b0-4455-8cbf-d09023f8d28e,1396578743616.8b698daab6cd3975bb0ddb557fefcca9.
> 2014-04-30 11:14:46,707 INFO
> org.apache.hadoop.hbase.regionserver.wal.HLog: Roll
> /hbase/.logs/cdh-datanode3,60020,1398822403800/cdh-datanode3%2C60020%2C1398822403800.1398824086564,
> entries=822, filesize=73174240.  for
> /hbase/.logs/cdh-datanode3,60020,1398822403800/cdh-datanode3%2C60020%2C1398822403800.1398827686678
> 2014-04-30 12:33:26,138 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on info in region .META.,,1.1028785192
>
>
>
>
>
> jingych
>
> From: jingych
> Date: 2014-04-30 10:40
> To: Jean-Marc Spaggiari
> CC: user
> Subject: Re: Re: Java Client Write Data blocked
> Thanks JM!
>
> Master log:
>
> 2014-04-29 16:28:21,973 INFO
> org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the
> region gspt_jbxx,,1398759815278.b94a0838f87658b577a0d94379a9e02d. that was
> online on cdh-datanode3,60020,1397641384262
> 2014-04-29 16:28:22,100 INFO
> org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the
> region
> gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398759815278.335a93356b11cda4d9729f6d59e71010.
> that was online on cdh-datanode2,60020,1397641384300
> 2014-04-29 16:29:46,328 INFO
> org.apache.hadoop.hbase.master.handler.SplitRegionHandler: Handled SPLIT
> event; parent=gspt_jbxx,,1398759815278.b94a0838f87658b577a0d94379a9e02d.
> daughter
> a=gspt_jbxx,,1398760182660.74250af18ea58894f5317b0363108eca.daughter
> b=gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760182660.0881f89847f5e9d61ae2d78c85936410.
> 2014-04-29 16:29:57,470 INFO
> org.apache.hadoop.hbase.master.handler.SplitRegionHandler: Handled SPLIT
> event;
> parent=gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398759815278.335a93356b11cda4d9729f6d59e71010.
> daughter
> a=gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398760191728.fc00a39545029397ae823c65109eaa96.daughter
> b=gspt_jbxx,bfe29ea0-5506-4166-8d6d-cece1c1242ee,1398760191728.44137a868fe92120dbe79dfa6f3b4e25.
> 2014-04-29 16:33:17,740 INFO org.apache.hadoop.hbase.master.LoadBalancer:
> Skipping load balancing because balanced cluster; servers=3 regions=1
> average=0.33333334 mostloaded=1 leastloaded=0
> 2014-04-29 16:33:17,740 INFO org.apache.hadoop.hbase.master.LoadBalancer:
> Skipping load balancing because balanced cluster; servers=3 regions=9
> average=3.0 mostloaded=3 leastloaded=3
>
> "This is most probably not what caused your writes to be stuck."
> You mean this is normal behavior, and it wouldn't block the process for a
> long time?
>
> Best Regards!
>
>
>
>
> jingych
>
> From: Jean-Marc Spaggiari
> Date: 2014-04-30 10:11
> To: jingych
> CC: user
> Subject: Re: Re: Java Client Write Data blocked
> this is the piece of code related to your stack trace:
>
>               // check periodically to see if a system stop is requested
>               if (Store.closeCheckInterval > 0) {
>                 bytesWritten += kv.getLength();
>                 if (bytesWritten > Store.closeCheckInterval) {
>                   bytesWritten = 0;
>                   isInterrupted(store, writer);
>                 }
>               }
>
>
>
> It seems something asked the region to close. Can you check the master logs
> at the same time to see if the balancer started? Your region was
> compacting, and I think that if the balancer starts while a region is
> compacting, it asks the compaction to stop.
>
>
> This is most probably not what caused your writes to be stuck.
>
>
> JM
>
>
>
>
> 2014-04-29 21:56 GMT-04:00 jingych <ji...@neusoft.com>:
>
> I found the compaction action in the region server log file:
>
> 2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Finished memstore flush of ~41.8 M/43820608, currentsize=13.9 M/14607128
> for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in
> 2325ms, sequenceid=819260, compaction requested=true
> 2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Starting compaction on i in region
> gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
> 2014-04-29 16:23:25,374 INFO org.apache.hadoop.hbase.regionserver.Store:
> Starting compaction of 4 file(s) in i of
> gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. into
> tmpdir=hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp,
> seqid=819260, totalSize=120.0 M
> 2014-04-29 16:23:25,387 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter
> type for
> hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426:
> CompoundBloomFilterWriter
> 2014-04-29 16:23:31,431 INFO
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush of region
> gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. due to global
> heap pressure
> 2014-04-29 16:23:31,471 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter
> type for
> hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b:
> CompoundBloomFilterWriter
> 2014-04-29 16:23:35,094 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
> DeleteFamily was added to HFile
> (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b)
> 2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.Store:
> Flushed , sequenceid=819276, memsize=41.8 M, into tmp file
> hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b
> 2014-04-29 16:23:35,116 INFO org.apache.hadoop.hbase.regionserver.Store:
> Added
> hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/i/18d58013c52f413497aca2ff1fd50c6b,
> entries=204648, sequenceid=819276, filesize=14.7 M
> 2014-04-29 16:23:35,131 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Finished memstore flush of ~41.8 M/43820792, currentsize=16.7 M/17528408
> for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in
> 3700ms, sequenceid=819276, compaction requested=false
> 2014-04-29 16:23:35,278 INFO
> org.apache.hadoop.hbase.regionserver.SplitTransaction: Starting split of
> region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
> 2014-04-29 16:23:36,622 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
> DeleteFamily was added to HFile
> (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426)
> 2014-04-29 16:23:36,626 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
> DeleteFamily was added to HFile
> (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426)
> 2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> compaction interrupted
> java.io.InterruptedIOException: Aborting compaction of store i in region
> gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. because it was
> interrupted.
> at
> org.apache.hadoop.hbase.regionserver.Compactor.isInterrupted(Compactor.java:230)
> at
> org.apache.hadoop.hbase.regionserver.Compactor.compact(Compactor.java:203)
> at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:1081)
> at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1336)
> at
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:303)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> 2014-04-29 16:23:36,627 INFO org.apache.hadoop.hbase.regionserver.HRegion:
> Running close preflush of
> gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
> 2014-04-29 16:23:36,628 INFO
> org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: aborted
> compaction:
> regionName=gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.,
> storeName=i, fileCount=4, fileSize=120.0 M (75.9 M, 14.7 M, 14.7 M, 14.7
> M), priority=3, time=2838185051945976; duration=11sec
> 2014-04-29 16:23:36,647 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter
> type for
> hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6:
> CompoundBloomFilterWriter
> 2014-04-29 16:23:37,694 INFO
> org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO
> DeleteFamily was added to HFile
> (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6)
> 2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.Store:
> Flushed , sequenceid=819286, memsize=25.1 M, into tmp file
> hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6
>
> Is the InterruptedIOException normal?
>
> Thanks!
>
>
>
>
> Jing Yucheng (井玉成)
>
> Basic Software Business Unit
> Neusoft Corporation
> Mobile: 13889491801
> Tel: 0411-84835702
>
> Room 217, Building D1, No. 901 Huangpu Road, Ganjingzi District, Dalian
> Postcode: 116085
> Email: jingych@neusoft.com
>
> From: jingych
> Date: 2014-04-30 09:50
> To: Jean-Marc Spaggiari; user
> Subject: Re: Re: Java Client Write Data blocked
> Thanks JM.
>
> But the log is too big. How can I post the log file?
>
> The query from HBase is slower too.
>
> The network is OK, I'm sure.
>
> Does GC have a log file? And how can I check swap usage?
>
> Sorry, I'm a rookie.
>
>
>
>
> jingych
>
> From: Jean-Marc Spaggiari
> Date: 2014-04-30 09:30
> To: user; jingych
> Subject: Re: Java Client Write Data blocked
> Any logs?
>
>
> Garbage collection on the server side? Network issue? Swap?
>
> Please share your master and region servers logs so we can provide
> feedback.
>
> JM
>
>
>
>
> 2014-04-29 21:26 GMT-04:00 jingych <ji...@neusoft.com>:
>
> Hi, All!
>
> I need help!
>
> I ran a Java client to write 3 million rows into HBase,
>
> but after writing almost 1 million, the process blocked without any
> exception.
>
> Does anyone know a possible reason, so I can find a solution?
>
> Thanks All!
>
> By the way, the HBase Version is 0.94.6-cdh4.5.0!
>
> Thanks again!
>
>
>
>
> jingych
>
>



-- 
Adrien Mogenet
http://www.borntosegfault.com

Re: Re: Java Client Write Data blocked

Posted by jingych <ji...@neusoft.com>.
Thanks JM! 

Sorry for reply you so late!

I set the client write buffer to 2 MB and execute with table#put(Put).  Is this a batch commit?

We have 3 test nodes with 8 GB of memory each.
And the region server heap size was the default, 256 MB.
With this config, one write thread blocked.

Then I increased the region server heap size to 2 GB, and all 3 write threads blocked.


Best Regards!




Jing Yucheng (井玉成)

Basic Software Business Unit
Neusoft Corporation
Mobile: 13889491801
Tel: 0411-84835702

Room 217, Building D1, No. 901 Huangpu Road, Ganjingzi District, Dalian
Postcode: 116085
Email: jingych@neusoft.com
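[Editor's note] With the 0.94 client, a 2 MB write buffer has no effect unless autoflush is turned off; with autoflush on (the default), each put(Put) is sent as its own RPC. A hedged sketch of how the buffer would actually be enabled, reusing the table name and column family from this thread (row key and values are placeholders, and a live cluster is required):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedPutExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "gspt_jbxx"); // table name from this thread
        try {
            table.setAutoFlush(false);                 // without this, every put(Put) is one RPC
            table.setWriteBufferSize(2 * 1024 * 1024); // flush once ~2 MB of Puts are buffered
            for (int i = 0; i < 100000; i++) {
                Put put = new Put(Bytes.toBytes("row-" + i)); // hypothetical row key
                put.add(Bytes.toBytes("i"), Bytes.toBytes("q"), Bytes.toBytes("v")); // family "i" from the logs
                table.put(put); // buffered client-side, not sent immediately
            }
            table.flushCommits(); // push whatever is still buffered
        } finally {
            table.close();
        }
    }
}
```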

From: Jean-Marc Spaggiari
Date: 2014-04-30 20:04
To: jingych
CC: user
Subject: Re: Re: Java Client Write Data blocked
Compactions are "normal".


How do you write your data into HBase?

- put(Put)

- put(<Puts>)

- batch(<Puts>)

- Bulk Loading

- etc.


What is your HBase configuration?

- Heap size?

- Regions sizes?

- etc.


How much memory do you have on each node?
How many nodes do you have?


Also, is there really nothing in the logs between 2014-04-30 10:17:11,961 and 2014-04-30 11:10:14,842?




2014-04-30 2:19 GMT-04:00 jingych <ji...@neusoft.com>:

Hi,JM!

I found the {responseTooSlow} info, and the region does "Starting compaction", "Delete Family Bloom filter", "Renaming compacted file" and "Completed major compaction" again and again!

Could you please tell me what's going on?

Thanks!

2014-04-30 10:16:37,044 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/9de4b874dfbc4d5292fce37be162fd1d: CompoundBloomFilterWriter
2014-04-30 10:16:40,234 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/9de4b874dfbc4d5292fce37be162fd1d) 
2014-04-30 10:16:40,234 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=834174, memsize=65.2 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/9de4b874dfbc4d5292fce37be162fd1d
2014-04-30 10:16:40,264 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/i/9de4b874dfbc4d5292fce37be162fd1d, entries=319488, sequenceid=834174, filesize=22.9 M
2014-04-30 10:16:40,266 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~65.2 M/68412616, currentsize=2.5 M/2622152 for region gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb. in 3276ms, sequenceid=834174, compaction requested=true
2014-04-30 10:16:40,266 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on i in region gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb.
2014-04-30 10:16:40,267 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3 file(s) in i of gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb. into tmpdir=hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp, seqid=834174, totalSize=161.7 M
2014-04-30 10:16:40,281 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/87d8063d5b0c4e2fbf1196613cad6924: CompoundBloomFilterWriter
2014-04-30 10:17:11,854 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/87d8063d5b0c4e2fbf1196613cad6924) 
2014-04-30 10:17:11,884 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/87d8063d5b0c4e2fbf1196613cad6924 to hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/i/87d8063d5b0c4e2fbf1196613cad6924
2014-04-30 10:17:11,960 INFO org.apache.hadoop.hbase.regionserver.Store: Completed compaction of 3 file(s) in i of gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb. into 87d8063d5b0c4e2fbf1196613cad6924, size=161.7 M; total size for store is 664.3 M
2014-04-30 10:17:11,961 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb., storeName=i, fileCount=3, fileSize=161.7 M, priority=3, time=2853849519897449; duration=31sec
2014-04-30 11:10:14,842 WARN org.apache.hadoop.ipc.HBaseServer: (responseTooSlow): {"processingtimems":11689,"call":"next(8855097496052270204, 1024), rpc version=1, client version=29, methodsFingerPrint=-1368823753","client":"192.168.196.214:34626","starttimems":1398827403069,"queuetimems":0,"class":"HRegionServer","responsesize":6,"method":"next"}
2014-04-30 11:11:02,658 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 7192967863259784397 lease expired on region test_cdc,279790,1396578140253.0d3d27285aa872ec855d282c1705811a.
2014-04-30 11:11:02,674 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -9210070512105072073 lease expired on region test_cdc,460072a7-0637-4d73-bc85-38cf78b05ec1,1396579225171.4bf9a540ba138b9cf944bac2110085be.
2014-04-30 11:11:02,719 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -2077930805323086067 lease expired on region test_cdc,9dccc20f-a1b0-4455-8cbf-d09023f8d28e,1396578743616.8b698daab6cd3975bb0ddb557fefcca9.
2014-04-30 11:12:29,999 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 7998402181053931475 lease expired on region test_cdc,279790,1396578140253.0d3d27285aa872ec855d282c1705811a.
2014-04-30 11:12:30,014 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 2545895780242061091 lease expired on region test_cdc,460072a7-0637-4d73-bc85-38cf78b05ec1,1396579225171.4bf9a540ba138b9cf944bac2110085be.
2014-04-30 11:12:30,058 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 3936133421604150523 lease expired on region test_cdc,9dccc20f-a1b0-4455-8cbf-d09023f8d28e,1396578743616.8b698daab6cd3975bb0ddb557fefcca9.
2014-04-30 11:14:46,707 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Roll /hbase/.logs/cdh-datanode3,60020,1398822403800/cdh-datanode3%2C60020%2C1398822403800.1398824086564, entries=822, filesize=73174240.  for /hbase/.logs/cdh-datanode3,60020,1398822403800/cdh-datanode3%2C60020%2C1398822403800.1398827686678
2014-04-30 12:33:26,138 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on info in region .META.,,1.1028785192





jingych

From: jingych
Date: 2014-04-30 10:40
To: Jean-Marc Spaggiari
CC: user
Subject: Re: Re: Java Client Write Data blocked
Thanks JM!

Master log:

2014-04-29 16:28:21,973 INFO org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the region gspt_jbxx,,1398759815278.b94a0838f87658b577a0d94379a9e02d. that was online on cdh-datanode3,60020,1397641384262
2014-04-29 16:28:22,100 INFO org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the region gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398759815278.335a93356b11cda4d9729f6d59e71010. that was online on cdh-datanode2,60020,1397641384300
2014-04-29 16:29:46,328 INFO org.apache.hadoop.hbase.master.handler.SplitRegionHandler: Handled SPLIT event; parent=gspt_jbxx,,1398759815278.b94a0838f87658b577a0d94379a9e02d. daughter a=gspt_jbxx,,1398760182660.74250af18ea58894f5317b0363108eca.daughter b=gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760182660.0881f89847f5e9d61ae2d78c85936410.
2014-04-29 16:29:57,470 INFO org.apache.hadoop.hbase.master.handler.SplitRegionHandler: Handled SPLIT event; parent=gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398759815278.335a93356b11cda4d9729f6d59e71010. daughter a=gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398760191728.fc00a39545029397ae823c65109eaa96.daughter b=gspt_jbxx,bfe29ea0-5506-4166-8d6d-cece1c1242ee,1398760191728.44137a868fe92120dbe79dfa6f3b4e25.
2014-04-29 16:33:17,740 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced cluster; servers=3 regions=1 average=0.33333334 mostloaded=1 leastloaded=0
2014-04-29 16:33:17,740 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced cluster; servers=3 regions=9 average=3.0 mostloaded=3 leastloaded=3

"This is most probably not what caused your writes to be stuck." 
You mean this is normal behavior, and that it wouldn't block the process for a long time?

Best Regards!




jingych

From: Jean-Marc Spaggiari
Date: 2014-04-30 10:11
To: jingych
CC: user
Subject: Re: Re: Java Client Write Data blocked
this is the piece of code related to your stack trace:

              // check periodically to see if a system stop is requested
              if (Store.closeCheckInterval > 0) {
                bytesWritten += kv.getLength();
                if (bytesWritten > Store.closeCheckInterval) {
                  bytesWritten = 0;
                  isInterrupted(store, writer);
                }
              }



It seems something asked the region to close. Can you check the master log at the same time to see if the balancer started? Your region was compacting, and I think that if the balancer starts while a region is compacting, it asks the compaction to stop.


This is most probably not what caused your writes to be stuck.


JM




2014-04-29 21:56 GMT-04:00 jingych <ji...@neusoft.com>:

I found the compaction action in the region server log file:

2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820608, currentsize=13.9 M/14607128 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 2325ms, sequenceid=819260, compaction requested=true 
2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
2014-04-29 16:23:25,374 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 4 file(s) in i of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. into tmpdir=hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp, seqid=819260, totalSize=120.0 M
2014-04-29 16:23:25,387 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426: CompoundBloomFilterWriter
2014-04-29 16:23:31,431 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. due to global heap pressure
2014-04-29 16:23:31,471 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b: CompoundBloomFilterWriter
2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b) 
2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819276, memsize=41.8 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b
2014-04-29 16:23:35,116 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/i/18d58013c52f413497aca2ff1fd50c6b, entries=204648, sequenceid=819276, filesize=14.7 M
2014-04-29 16:23:35,131 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820792, currentsize=16.7 M/17528408 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 3700ms, sequenceid=819276, compaction requested=false
2014-04-29 16:23:35,278 INFO org.apache.hadoop.hbase.regionserver.SplitTransaction: Starting split of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
2014-04-29 16:23:36,622 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426) 
2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426) 
2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction interrupted
java.io.InterruptedIOException: Aborting compaction of store i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. because it was interrupted.
at org.apache.hadoop.hbase.regionserver.Compactor.isInterrupted(Compactor.java:230)
at org.apache.hadoop.hbase.regionserver.Compactor.compact(Compactor.java:203)
at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:1081)
at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1336)
at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:303)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
2014-04-29 16:23:36,627 INFO org.apache.hadoop.hbase.regionserver.HRegion: Running close preflush of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
2014-04-29 16:23:36,628 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: aborted compaction: regionName=gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b., storeName=i, fileCount=4, fileSize=120.0 M (75.9 M, 14.7 M, 14.7 M, 14.7 M), priority=3, time=2838185051945976; duration=11sec
2014-04-29 16:23:36,647 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6: CompoundBloomFilterWriter
2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6) 
2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819286, memsize=25.1 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6

Is the InterruptedIOException normal?

Thanks!




Jing Yucheng

Fundamental Software Business Unit
Neusoft Corporation
Mobile: 13889491801
Tel: 0411-84835702

Room 217, Building D1, 901 Huangpu Road, Ganjingzi District, Dalian
Postcode: 116085
Email: jingych@neusoft.com

From: jingych
Date: 2014-04-30 09:50
To: Jean-Marc Spaggiari; user
Subject: Re: Re: Java Client Write Data blocked
Thanks JM.

But the log is too big. How can I post the log file?

Queries from HBase are slower too.

The network is OK, I'm sure.

Does GC have a log file? And how can I check swap?

Sorry, I'm a rookie.




jingych

From: Jean-Marc Spaggiari
Date: 2014-04-30 09:30
To: user; jingych
Subject: Re: Java Client Write Data blocked
Any logs? 


Garbage collection on the server side? Network issue? Swap?
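Not part of the original reply, but when a client "blocks without any exception", a thread dump usually shows exactly which call it is waiting on. A minimal JDK-only sketch (the class name is made up; `jstack <pid>` gives the same information from the shell):

```java
import java.util.Map;

public class ClientStackDump {
    // Collect every live thread's name, state, and stack so you can see
    // where a hung client is parked (e.g. waiting on an HBase RPC).
    public static String dump() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            sb.append('"').append(e.getKey().getName()).append("\" state=")
              .append(e.getKey().getState()).append('\n');
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(dump());
    }
}
```

If the writer thread is sitting in a socket read inside the HBase client, the problem is server-side; if it is blocked on the client's own buffers, look at the client configuration.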

Please share your master and region servers logs so we can provide feedback.

JM




2014-04-29 21:26 GMT-04:00 jingych <ji...@neusoft.com>:

Hi, All!

I need help!

I ran the Java client to write 3 million rows into HBase,

but after writing almost 1 million, the process blocked without any exception.

Does anyone know the possible reason? So I can find a solution.

Thanks All!

By the way, the HBase Version is 0.94.6-cdh4.5.0!

Thanks again!




jingych
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s)
is intended only for the use of the intended recipient and may be confidential and/or privileged of
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is
not the intended recipient, unauthorized use, forwarding, printing,  storing, disclosure or copying
is strictly prohibited, and may be unlawful.If you have received this communication in error,please
immediately notify the sender by return e-mail, and delete the original message and all copies from
your system. Thank you.
---------------------------------------------------------------------------------------------------

Re: Re: Java Client Write Data blocked

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
Compactions are "normal".

How do you write your data into HBase?
- put(Put)
- put(<Puts>)
- batch(<Puts>)
- Bulk Loading
- etc.

What is your HBase configuration?
- Heap size?
- Regions sizes?
- etc.

How much memory do you have on each node?
How many nodes do you have?

Also, do you really have nothing in the logs between 2014-04-30 10:17:11,961 and
2014-04-30 11:10:14,842?


2014-04-30 2:19 GMT-04:00 jingych <ji...@neusoft.com>:

>   Hi,JM!
>
> I found the {responseTooSlow} info, And the region do the
> "Starting compaction ", "Delete Family Bloom filter ",
> "Renaming compacted file " and "Completed major compaction " again and
> again!
>
> Could you please tell me what's going on?
>
> Thanks!
>
> 2014-04-30 10
> :16:37,044 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/9de4b874dfbc4d5292fce37be162fd1d: CompoundBloomFilterWriter
> 2014-04-30 10
> :16:40,234 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/9de4b874dfbc4d5292fce37be162fd1d)
> 2014-04-30 10
> :16:40,234 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=834174, memsize=65.2 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/9de4b874dfbc4d5292fce37be162fd1d
> 2014-04-30 10
> :16:40,264 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/i/9de4b874dfbc4d5292fce37be162fd1d, entries=319488, sequenceid=834174, filesize=22.9 M
> 2014-04-30 10
> :16:40,266 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~65.2 M/68412616, currentsize=2.5 M/2622152 for region gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb. in 3276ms, sequenceid=834174, compaction requested=true
> 2014-04-30 10
> :16:40,266 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on i in region gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb.
> 2014-04-30 10
> :16:40,267 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3 file(s) in i of gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb. into tmpdir=hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp, seqid=834174, totalSize=161.7 M
> 2014-04-30 10
> :16:40,281 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/87d8063d5b0c4e2fbf1196613cad6924: CompoundBloomFilterWriter
> 2014-04-30 10
> :17:11,854 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/87d8063d5b0c4e2fbf1196613cad6924)
> 2014-04-30 10
> :17:11,884 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/87d8063d5b0c4e2fbf1196613cad6924 to hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/i/87d8063d5b0c4e2fbf1196613cad6924
> 2014-04-30 10
> :17:11,960 INFO org.apache.hadoop.hbase.regionserver.Store: Completed compaction of 3 file(s) in i of gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb. into 87d8063d5b0c4e2fbf1196613cad6924, size=161.7 M; total size for store is 664.3 M
> 2014-04-30 10
> :17:11,961 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb., storeName=i, fileCount=3, fileSize=161.7 M, priority=3, time=2853849519897449; duration=31sec
>
> 2014-04-30 11:10:14,842 WARN org.apache.hadoop.ipc.HBaseServer: (responseTooSlow): {"processingtimems":11689,"call":"next(8855097496052270204, 1024), rpc version=1, client version=29, methodsFingerPrint=-1368823753","client":"
> 192.168.196.214:34626
> ","starttimems":1398827403069,"queuetimems":0,"class":"HRegionServer","responsesize":6,"method":"next"}
> 2014-04-
> 30 11:11:02,658 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 7192967863259784397 lease expired on region test_cdc,279790,1396578140253.0d3d27285aa872ec855d282c1705811a.
>
> 2014-04-30 11:11:02,674 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -9210070512105072073 lease expired on region test_cdc,460072a7-0637-4d73-bc85-38cf78b05ec1,1396579225171.4bf9a540ba138b9cf944bac2110085be.
>
> 2014-04-30 11:11:02,719 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -2077930805323086067 lease expired on region test_cdc,9dccc20f-a1b0-4455-8cbf-d09023f8d28e,1396578743616.8b698daab6cd3975bb0ddb557fefcca9.
>
> 2014-04-30 11:12:29,999 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 7998402181053931475 lease expired on region test_cdc,279790,1396578140253.0d3d27285aa872ec855d282c1705811a.
>
> 2014-04-30 11:12:30,014 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 2545895780242061091 lease expired on region test_cdc,460072a7-0637-4d73-bc85-38cf78b05ec1,1396579225171.4bf9a540ba138b9cf944bac2110085be.
>
> 2014-04-30 11:12:30,058 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 3936133421604150523 lease expired on region test_cdc,9dccc20f-a1b0-4455-8cbf-d09023f8d28e,1396578743616.8b698daab6cd3975bb0ddb557fefcca9.
>
> 2014-04-30 11:14:46,707 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Roll /hbase/.logs/cdh-datanode3,60020,1398822403800/cdh-datanode3%2C60020%2C1398822403800.1398824086564, entries=822, filesize=73174240.  for /hbase/.logs/cdh-datanode3,60020,1398822403800/cdh-datanode3%2C60020%2C1398822403800.1398827686678
>
> 2014-04-30 12:33:26,138 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on info in region .META.,,1.1028785192
>
>
> ------------------------------
>
> jingych
>
>  *From:* jingych <ji...@neusoft.com>
> *Date:* 2014-04-30 10:40
> *To:* Jean-Marc Spaggiari <je...@spaggiari.org>
> *CC:* user <us...@hbase.apache.org>
> *Subject:* Re: Re: Java Client Write Data blocked
>  Thanks JM!
>
> Master log:
>
>
> 2014-04-29 16:28:21,973 INFO org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the region gspt_jbxx,,1398759815278.b94a0838f87658b577a0d94379a9e02d. that was online on cdh-datanode3,60020,1397641384262
>
> 2014-04-29 16:28:22,100 INFO org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the region gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398759815278.335a93356b11cda4d9729f6d59e71010. that was online on cdh-datanode2,60020,1397641384300
>
> 2014-04-29 16:29:46,328 INFO org.apache.hadoop.hbase.master.handler.SplitRegionHandler: Handled SPLIT event; parent=gspt_jbxx,,1398759815278.b94a0838f87658b577a0d94379a9e02d. daughter a=gspt_jbxx,,1398760182660.74250af18ea58894f5317b0363108eca.daughter b=gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760182660.0881f89847f5e9d61ae2d78c85936410.
>
> 2014-04-29 16:29:57,470 INFO org.apache.hadoop.hbase.master.handler.SplitRegionHandler: Handled SPLIT event; parent=gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398759815278.335a93356b11cda4d9729f6d59e71010. daughter a=gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398760191728.fc00a39545029397ae823c65109eaa96.daughter b=gspt_jbxx,bfe29ea0-5506-4166-8d6d-cece1c1242ee,1398760191728.44137a868fe92120dbe79dfa6f3b4e25.
>
> 2014-04-29 16:33:17,740 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced cluster; servers=3 regions=1 average=0.33333334 mostloaded=1 leastloaded=0
>
> 2014-04-29 16:33:17,740 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced cluster; servers=3 regions=9 average=3.0 mostloaded=3 leastloaded=3
>
> "This is most probably not what caused your writes to be stuck."
>
> You mean this is the normal action, this wouldn't cause the process blocked long time?
>
> Best Regards!
>
>
>
>
> jingych
>
> From: Jean-Marc Spaggiari
> Date: 2014-04-30 10:11
> To: jingych
> CC: user
> Subject: Re: Re: Java Client Write Data blocked
> this is the piece of code related to your stack trace:
>
>               // check periodically to see if a system stop is requested
>               if (Store.closeCheckInterval > 0) {
>                 bytesWritten += kv.getLength();
>                 if (bytesWritten > Store.closeCheckInterval) {
>                   bytesWritten = 0;
>                   isInterrupted(store, writer);
>                 }
>               }
>
>
>
>
> It seems something asked the region to close. Can you check the same time in the master to see if the balancer get started? Your region was compacting. And I think that if balancer start while region is compacting, balancer ask the compaction to stop.
>
>
> This is most probably not what caused your writes to be stuck.
>
>
> JM
>
>
>
>
> 2014-04-29 21:56 GMT-04:00 jingych <ji...@neusoft.com>:
>
> I found the compaction action in the region server log file:
>
>
> 2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820608, currentsize=13.9 M/14607128 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 2325ms, sequenceid=819260, compaction requested=true
>
> 2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
>
> 2014-04-29 16:23:25,374 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 4 file(s) in i of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. into tmpdir=hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp, seqid=819260, totalSize=120.0 M
>
> 2014-04-29 16:23:25,387 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426: CompoundBloomFilterWriter
>
> 2014-04-29 16:23:31,431 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. due to global heap pressure
>
> 2014-04-29 16:23:31,471 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b: CompoundBloomFilterWriter
>
> 2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b)
>
> 2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819276, memsize=41.8 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b
>
> 2014-04-29 16:23:35,116 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/i/18d58013c52f413497aca2ff1fd50c6b, entries=204648, sequenceid=819276, filesize=14.7 M
>
> 2014-04-29 16:23:35,131 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820792, currentsize=16.7 M/17528408 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 3700ms, sequenceid=819276, compaction requested=false
>
> 2014-04-29 16:23:35,278 INFO org.apache.hadoop.hbase.regionserver.SplitTransaction: Starting split of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
>
> 2014-04-29 16:23:36,622 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426)
>
> 2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426)
>
> 2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction interrupted
>
> java.io.InterruptedIOException: Aborting compaction of store i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. because it was interrupted.
>
> at org.apache.hadoop.hbase.regionserver.Compactor.isInterrupted(Compactor.java:230)
>
> at org.apache.hadoop.hbase.regionserver.Compactor.compact(Compactor.java:203)
> at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:1081)
> at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1336)
>
> at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:303)
>
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
>
> 2014-04-29 16:23:36,627 INFO org.apache.hadoop.hbase.regionserver.HRegion: Running close preflush of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
>
> 2014-04-29 16:23:36,628 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: aborted compaction: regionName=gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b., storeName=i, fileCount=4, fileSize=120.0 M (75.9 M, 14.7 M, 14.7 M, 14.7 M), priority=3, time=2838185051945976; duration=11sec
>
> 2014-04-29 16:23:36,647 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6: CompoundBloomFilterWriter
>
> 2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6)
>
> 2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819286, memsize=25.1 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6
>
> Does it normal with the InterruptedIOException?
>
> Thanks!
>
>
>
>
> 井玉成
>
> 基础软件事业部
> 东软集团股份有限公司
> 手机:13889491801
> 电话:0411-84835702
>
> 大连市甘井子区黄浦路901号 D1座217室
> Postcode:116085
> Email:jingych@neusoft.com
>
> From: jingych
> Date: 2014-04-30 09:50
> To: Jean-Marc Spaggiari; user
> Subject: Re: Re: Java Client Write Data blocked
> Thanks JM.
>
> But the log is too big, How can I post the log file?
>
> The query from HBase is slower too.
>
> The Network is OK, I'm sure.
>
> Does GC have the log file ? And how to know the swap?
>
> Sorry, I'm rookie.
>
>
>
>
> jingych
>
> From: Jean-Marc Spaggiari
> Date: 2014-04-30 09:30
> To: user; jingych
> Subject: Re: Java Client Write Data blocked
> Any logs?
>
>
> Gargabe collection on the server side? Network issue? Swap?
>
>
> Please share your master and region servers logs so we can provide feedback.
>
> JM
>
>
>
>
> 2014-04-29 21:26 GMT-04:00 jingych <ji...@neusoft.com>:
>
> Hi, All!
>
> I need help!
>
> I run the java client to write 3million data into HBase,
>
>
> but when wrote almost 1 million, the process was blocked without any exception.
>
> Does anyone know the possible reason? So i can find the solution.
>
> Thanks All!
>
> By the way, the HBase Version is 0.94.6-cdh4.5.0!
>
> Thanks again!
>
>
>
>
> jingych
>
> ---------------------------------------------------------------------------------------------------
>
> Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s)
>
> is intended only for the use of the intended recipient and may be confidential and/or privileged of
>
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is
>
> not the intended recipient, unauthorized use, forwarding, printing,  storing, disclosure or copying
>
> is strictly prohibited, and may be unlawful.If you have received this communication in error,please
>
> immediately notify the sender by return e-mail, and delete the original message and all copies from
> your system. Thank you.
>
> ---------------------------------------------------------------------------------------------------
>

Re: Re: Java Client Write Data blocked

Posted by jingych <ji...@neusoft.com>.
Hi,JM!

I found the {responseTooSlow} warning, and the region repeats "Starting compaction", "Delete Family Bloom filter", "Renaming compacted file" and "Completed compaction" again and again!

Could you please tell me what's going on?

Thanks!

2014-04-30 10:16:37,044 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/9de4b874dfbc4d5292fce37be162fd1d: CompoundBloomFilterWriter
2014-04-30 10:16:40,234 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/9de4b874dfbc4d5292fce37be162fd1d) 
2014-04-30 10:16:40,234 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=834174, memsize=65.2 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/9de4b874dfbc4d5292fce37be162fd1d
2014-04-30 10:16:40,264 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/i/9de4b874dfbc4d5292fce37be162fd1d, entries=319488, sequenceid=834174, filesize=22.9 M
2014-04-30 10:16:40,266 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~65.2 M/68412616, currentsize=2.5 M/2622152 for region gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb. in 3276ms, sequenceid=834174, compaction requested=true
2014-04-30 10:16:40,266 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on i in region gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb.
2014-04-30 10:16:40,267 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 3 file(s) in i of gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb. into tmpdir=hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp, seqid=834174, totalSize=161.7 M
2014-04-30 10:16:40,281 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/87d8063d5b0c4e2fbf1196613cad6924: CompoundBloomFilterWriter
2014-04-30 10:17:11,854 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/87d8063d5b0c4e2fbf1196613cad6924) 
2014-04-30 10:17:11,884 INFO org.apache.hadoop.hbase.regionserver.Store: Renaming compacted file at hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/.tmp/87d8063d5b0c4e2fbf1196613cad6924 to hdfs://cdh-namenode:8020/hbase/gspt_jbxx/ccd326753625e15bf7d573ae82fc70fb/i/87d8063d5b0c4e2fbf1196613cad6924
2014-04-30 10:17:11,960 INFO org.apache.hadoop.hbase.regionserver.Store: Completed compaction of 3 file(s) in i of gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb. into 87d8063d5b0c4e2fbf1196613cad6924, size=161.7 M; total size for store is 664.3 M
2014-04-30 10:17:11,961 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: completed compaction: regionName=gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760474036.ccd326753625e15bf7d573ae82fc70fb., storeName=i, fileCount=3, fileSize=161.7 M, priority=3, time=2853849519897449; duration=31sec
2014-04-30 11:10:14,842 WARN org.apache.hadoop.ipc.HBaseServer: (responseTooSlow): {"processingtimems":11689,"call":"next(8855097496052270204, 1024), rpc version=1, client version=29, methodsFingerPrint=-1368823753","client":"192.168.196.214:34626","starttimems":1398827403069,"queuetimems":0,"class":"HRegionServer","responsesize":6,"method":"next"}
2014-04-30 11:11:02,658 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 7192967863259784397 lease expired on region test_cdc,279790,1396578140253.0d3d27285aa872ec855d282c1705811a.
2014-04-30 11:11:02,674 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -9210070512105072073 lease expired on region test_cdc,460072a7-0637-4d73-bc85-38cf78b05ec1,1396579225171.4bf9a540ba138b9cf944bac2110085be.
2014-04-30 11:11:02,719 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner -2077930805323086067 lease expired on region test_cdc,9dccc20f-a1b0-4455-8cbf-d09023f8d28e,1396578743616.8b698daab6cd3975bb0ddb557fefcca9.
2014-04-30 11:12:29,999 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 7998402181053931475 lease expired on region test_cdc,279790,1396578140253.0d3d27285aa872ec855d282c1705811a.
2014-04-30 11:12:30,014 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 2545895780242061091 lease expired on region test_cdc,460072a7-0637-4d73-bc85-38cf78b05ec1,1396579225171.4bf9a540ba138b9cf944bac2110085be.
2014-04-30 11:12:30,058 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Scanner 3936133421604150523 lease expired on region test_cdc,9dccc20f-a1b0-4455-8cbf-d09023f8d28e,1396578743616.8b698daab6cd3975bb0ddb557fefcca9.
2014-04-30 11:14:46,707 INFO org.apache.hadoop.hbase.regionserver.wal.HLog: Roll /hbase/.logs/cdh-datanode3,60020,1398822403800/cdh-datanode3%2C60020%2C1398822403800.1398824086564, entries=822, filesize=73174240.  for /hbase/.logs/cdh-datanode3,60020,1398822403800/cdh-datanode3%2C60020%2C1398822403800.1398827686678
2014-04-30 12:33:26,138 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on info in region .META.,,1.1028785192
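The row keys in these log lines look like UUIDs (e.g. 40b42714-...), so a bulk load into a fresh table starts with a single region that keeps flushing, splitting and compacting under the write load. A common mitigation (not discussed in this thread, just a sketch) is to pre-split the table on hex prefixes. The helper below is plain Java with no HBase dependency; the resulting array would be passed to the 0.94-era HBaseAdmin.createTable(tableDescriptor, splits).

```java
import java.nio.charset.StandardCharsets;

public class SplitKeys {
    // Generate split points for row keys that start with a hex character
    // (e.g. UUIDs such as 40b42714-...). 15 split points yield 16 regions
    // up front, instead of letting one region split repeatedly under load.
    public static byte[][] hexSplits() {
        char[] hex = "123456789abcdef".toCharArray();
        byte[][] splits = new byte[hex.length][];
        for (int i = 0; i < hex.length; i++) {
            splits[i] = String.valueOf(hex[i]).getBytes(StandardCharsets.UTF_8);
        }
        return splits;
    }

    public static void main(String[] args) {
        // These keys would be handed to HBaseAdmin.createTable(desc, splits).
        for (byte[] split : hexSplits()) {
            System.out.println(new String(split, StandardCharsets.UTF_8));
        }
    }
}
```

With uniformly distributed UUID keys, each of the 16 regions then receives roughly 1/16th of the writes from the start.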





jingych

From: jingych
Date: 2014-04-30 10:40
To: Jean-Marc Spaggiari
CC: user
Subject: Re: Re: Java Client Write Data blocked
Thanks JM!

Master log:

2014-04-29 16:28:21,973 INFO org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the region gspt_jbxx,,1398759815278.b94a0838f87658b577a0d94379a9e02d. that was online on cdh-datanode3,60020,1397641384262
2014-04-29 16:28:22,100 INFO org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the region gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398759815278.335a93356b11cda4d9729f6d59e71010. that was online on cdh-datanode2,60020,1397641384300
2014-04-29 16:29:46,328 INFO org.apache.hadoop.hbase.master.handler.SplitRegionHandler: Handled SPLIT event; parent=gspt_jbxx,,1398759815278.b94a0838f87658b577a0d94379a9e02d. daughter a=gspt_jbxx,,1398760182660.74250af18ea58894f5317b0363108eca.daughter b=gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760182660.0881f89847f5e9d61ae2d78c85936410.
2014-04-29 16:29:57,470 INFO org.apache.hadoop.hbase.master.handler.SplitRegionHandler: Handled SPLIT event; parent=gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398759815278.335a93356b11cda4d9729f6d59e71010. daughter a=gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398760191728.fc00a39545029397ae823c65109eaa96.daughter b=gspt_jbxx,bfe29ea0-5506-4166-8d6d-cece1c1242ee,1398760191728.44137a868fe92120dbe79dfa6f3b4e25.
2014-04-29 16:33:17,740 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced cluster; servers=3 regions=1 average=0.33333334 mostloaded=1 leastloaded=0
2014-04-29 16:33:17,740 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced cluster; servers=3 regions=9 average=3.0 mostloaded=3 leastloaded=3

"This is most probably not what caused your writes to be stuck." 
You mean this is normal behavior, and it wouldn't block the process for a long time?

Best Regards!




jingych

From: Jean-Marc Spaggiari
Date: 2014-04-30 10:11
To: jingych
CC: user
Subject: Re: Re: Java Client Write Data blocked
This is the piece of code related to your stack trace:

              // check periodically to see if a system stop is requested
              if (Store.closeCheckInterval > 0) {
                bytesWritten += kv.getLength();
                if (bytesWritten > Store.closeCheckInterval) {
                  bytesWritten = 0;
                  isInterrupted(store, writer);
                }
              }



It seems something asked the region to close. Can you check the master log at the same time to see whether the balancer started? Your region was compacting, and I think that if the balancer starts while a region is compacting, it asks the compaction to stop.


This is most probably not what caused your writes to be stuck.


JM
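The snippet above checks for an interrupt only once per Store.closeCheckInterval bytes written, rather than on every KeyValue, so a close request aborts the compaction quickly without a per-cell overhead. A self-contained sketch of that pattern (the names and the 10 MB interval are illustrative, not HBase's actual values):

```java
import java.io.InterruptedIOException;
import java.util.concurrent.atomic.AtomicBoolean;

public class InterruptibleCopy {
    // Illustrative stand-in for Store.closeCheckInterval.
    static final long CHECK_INTERVAL = 10L * 1024 * 1024;

    // Copies "cells" of the given sizes, polling the stop flag only once per
    // CHECK_INTERVAL bytes -- the same cooperative-interrupt pattern the
    // HBase compactor uses in the stack trace above.
    public static long copy(long[] cellSizes, AtomicBoolean stopRequested)
            throws InterruptedIOException {
        long total = 0, sinceCheck = 0;
        for (long size : cellSizes) {
            // (write the cell here)
            total += size;
            sinceCheck += size;
            if (sinceCheck > CHECK_INTERVAL) {
                sinceCheck = 0;
                if (stopRequested.get()) {
                    throw new InterruptedIOException("Aborting: stop requested");
                }
            }
        }
        return total;
    }
}
```

This is why an aborted compaction shows up as an InterruptedIOException: the writer notices the close request at the next interval boundary and bails out, leaving the .tmp file to be discarded.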




2014-04-29 21:56 GMT-04:00 jingych <ji...@neusoft.com>:

I found this compaction activity in the region server log file:

2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820608, currentsize=13.9 M/14607128 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 2325ms, sequenceid=819260, compaction requested=true 
2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
2014-04-29 16:23:25,374 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 4 file(s) in i of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. into tmpdir=hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp, seqid=819260, totalSize=120.0 M
2014-04-29 16:23:25,387 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426: CompoundBloomFilterWriter
2014-04-29 16:23:31,431 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. due to global heap pressure
2014-04-29 16:23:31,471 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b: CompoundBloomFilterWriter
2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b) 
2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819276, memsize=41.8 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b
2014-04-29 16:23:35,116 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/i/18d58013c52f413497aca2ff1fd50c6b, entries=204648, sequenceid=819276, filesize=14.7 M
2014-04-29 16:23:35,131 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820792, currentsize=16.7 M/17528408 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 3700ms, sequenceid=819276, compaction requested=false
2014-04-29 16:23:35,278 INFO org.apache.hadoop.hbase.regionserver.SplitTransaction: Starting split of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
2014-04-29 16:23:36,622 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426) 
2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426) 
2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction interrupted
java.io.InterruptedIOException: Aborting compaction of store i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. because it was interrupted.
at org.apache.hadoop.hbase.regionserver.Compactor.isInterrupted(Compactor.java:230)
at org.apache.hadoop.hbase.regionserver.Compactor.compact(Compactor.java:203)
at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:1081)
at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1336)
at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:303)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
2014-04-29 16:23:36,627 INFO org.apache.hadoop.hbase.regionserver.HRegion: Running close preflush of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
2014-04-29 16:23:36,628 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: aborted compaction: regionName=gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b., storeName=i, fileCount=4, fileSize=120.0 M (75.9 M, 14.7 M, 14.7 M, 14.7 M), priority=3, time=2838185051945976; duration=11sec
2014-04-29 16:23:36,647 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6: CompoundBloomFilterWriter
2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6) 
2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819286, memsize=25.1 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6

Is this InterruptedIOException normal?

Thanks!
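The "Flush of region ... due to global heap pressure" line above means the combined memstore usage across all regions on the server hit the server-wide cap, not that any single region reached its flush threshold. A rough sketch of the arithmetic, assuming the 0.94 defaults (128 MB per-region flush size, global upper limit at 40% of the region server heap; both are assumptions to verify in hbase-site.xml):

```java
public class MemstorePressure {
    // Assumed 0.94 defaults -- verify against your hbase-site.xml:
    // hbase.hregion.memstore.flush.size and
    // hbase.regionserver.global.memstore.upperLimit.
    static final long FLUSH_SIZE = 128L * 1024 * 1024;
    static final double GLOBAL_UPPER_LIMIT = 0.40;

    // Combined memstore bytes at which forced flushes start
    // ("due to global heap pressure"), regardless of per-region sizes.
    public static long globalFlushTrigger(long heapBytes) {
        return (long) (heapBytes * GLOBAL_UPPER_LIMIT);
    }

    public static void main(String[] args) {
        long heap = 1024L * 1024 * 1024; // e.g. a 1 GB region server heap
        // ~410 MB shared by all regions: flushes fire early and small,
        // consistent with the ~42 MB flushes in the log above.
        System.out.println(globalFlushTrigger(heap));
    }
}
```

If the flushes in your logs are consistently far below the per-region flush size, a larger region server heap (or a lower write rate) would reduce the forced-flush and compaction churn.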




Jing Yucheng

Basic Software Division
Neusoft Corporation
Mobile: 13889491801
Tel: 0411-84835702

Room 217, Building D1, No. 901 Huangpu Road, Ganjingzi District, Dalian
Postcode: 116085
Email: jingych@neusoft.com

From: jingych
Date: 2014-04-30 09:50
To: Jean-Marc Spaggiari; user
Subject: Re: Re: Java Client Write Data blocked
Thanks JM.

But the log is too big. How can I post the log file?

Queries against HBase are slower too.

The Network is OK, I'm sure.

Does GC write to a log file? And how can I check swap usage?

Sorry, I'm a rookie.




jingych

From: Jean-Marc Spaggiari
Date: 2014-04-30 09:30
To: user; jingych
Subject: Re: Java Client Write Data blocked
Any logs? 


Garbage collection on the server side? Network issue? Swap?

Please share your master and region servers logs so we can provide feedback.

JM




2014-04-29 21:26 GMT-04:00 jingych <ji...@neusoft.com>:

Hi, All!

I need help!

I ran the Java client to write 3 million rows into HBase,

but after writing almost 1 million, the process blocked without any exception.

Does anyone know a possible reason, so I can find a solution?

Thanks All!

By the way, the HBase Version is 0.94.6-cdh4.5.0!

Thanks again!




jingych

Re: Re: Java Client Write Data blocked

Posted by jingych <ji...@neusoft.com>.
Thanks JM!

Master log:

2014-04-29 16:28:21,973 INFO org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the region gspt_jbxx,,1398759815278.b94a0838f87658b577a0d94379a9e02d. that was online on cdh-datanode3,60020,1397641384262
2014-04-29 16:28:22,100 INFO org.apache.hadoop.hbase.master.AssignmentManager: The master has opened the region gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398759815278.335a93356b11cda4d9729f6d59e71010. that was online on cdh-datanode2,60020,1397641384300
2014-04-29 16:29:46,328 INFO org.apache.hadoop.hbase.master.handler.SplitRegionHandler: Handled SPLIT event; parent=gspt_jbxx,,1398759815278.b94a0838f87658b577a0d94379a9e02d. daughter a=gspt_jbxx,,1398760182660.74250af18ea58894f5317b0363108eca.daughter b=gspt_jbxx,40b42714-f8b7-49ce-a425-cc1ae340a903,1398760182660.0881f89847f5e9d61ae2d78c85936410.
2014-04-29 16:29:57,470 INFO org.apache.hadoop.hbase.master.handler.SplitRegionHandler: Handled SPLIT event; parent=gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398759815278.335a93356b11cda4d9729f6d59e71010. daughter a=gspt_jbxx,81526a4d-fcd5-4edd-9cb2-b28cd138d8ad,1398760191728.fc00a39545029397ae823c65109eaa96.daughter b=gspt_jbxx,bfe29ea0-5506-4166-8d6d-cece1c1242ee,1398760191728.44137a868fe92120dbe79dfa6f3b4e25.
2014-04-29 16:33:17,740 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced cluster; servers=3 regions=1 average=0.33333334 mostloaded=1 leastloaded=0
2014-04-29 16:33:17,740 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced cluster; servers=3 regions=9 average=3.0 mostloaded=3 leastloaded=3

"This is most probably not what caused your writes to be stuck." 
You mean this is normal behavior, and it wouldn't block the process for a long time?

Best Regards!




jingych

From: Jean-Marc Spaggiari
Date: 2014-04-30 10:11
To: jingych
CC: user
Subject: Re: Re: Java Client Write Data blocked
This is the piece of code related to your stack trace:

              // check periodically to see if a system stop is requested
              if (Store.closeCheckInterval > 0) {
                bytesWritten += kv.getLength();
                if (bytesWritten > Store.closeCheckInterval) {
                  bytesWritten = 0;
                  isInterrupted(store, writer);
                }
              }



It seems something asked the region to close. Can you check the master log at the same time to see whether the balancer started? Your region was compacting, and I think that if the balancer starts while a region is compacting, it asks the compaction to stop.


This is most probably not what caused your writes to be stuck.


JM




2014-04-29 21:56 GMT-04:00 jingych <ji...@neusoft.com>:

I found this compaction activity in the region server log file:

2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820608, currentsize=13.9 M/14607128 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 2325ms, sequenceid=819260, compaction requested=true 
2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
2014-04-29 16:23:25,374 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 4 file(s) in i of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. into tmpdir=hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp, seqid=819260, totalSize=120.0 M
2014-04-29 16:23:25,387 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426: CompoundBloomFilterWriter
2014-04-29 16:23:31,431 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. due to global heap pressure
2014-04-29 16:23:31,471 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b: CompoundBloomFilterWriter
2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b) 
2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819276, memsize=41.8 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b
2014-04-29 16:23:35,116 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/i/18d58013c52f413497aca2ff1fd50c6b, entries=204648, sequenceid=819276, filesize=14.7 M
2014-04-29 16:23:35,131 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820792, currentsize=16.7 M/17528408 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 3700ms, sequenceid=819276, compaction requested=false
2014-04-29 16:23:35,278 INFO org.apache.hadoop.hbase.regionserver.SplitTransaction: Starting split of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
2014-04-29 16:23:36,622 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426) 
2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426) 
2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction interrupted
java.io.InterruptedIOException: Aborting compaction of store i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. because it was interrupted.
at org.apache.hadoop.hbase.regionserver.Compactor.isInterrupted(Compactor.java:230)
at org.apache.hadoop.hbase.regionserver.Compactor.compact(Compactor.java:203)
at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:1081)
at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1336)
at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:303)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
2014-04-29 16:23:36,627 INFO org.apache.hadoop.hbase.regionserver.HRegion: Running close preflush of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
2014-04-29 16:23:36,628 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: aborted compaction: regionName=gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b., storeName=i, fileCount=4, fileSize=120.0 M (75.9 M, 14.7 M, 14.7 M, 14.7 M), priority=3, time=2838185051945976; duration=11sec
2014-04-29 16:23:36,647 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6: CompoundBloomFilterWriter
2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6) 
2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819286, memsize=25.1 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6

Is this InterruptedIOException normal?

Thanks!




Jing Yucheng

Basic Software Division
Neusoft Corporation
Mobile: 13889491801
Tel: 0411-84835702

Room 217, Building D1, No. 901 Huangpu Road, Ganjingzi District, Dalian
Postcode: 116085
Email: jingych@neusoft.com

From: jingych
Date: 2014-04-30 09:50
To: Jean-Marc Spaggiari; user
Subject: Re: Re: Java Client Write Data blocked
Thanks JM.

But the log is too big. How can I post the log file?

Queries against HBase are slower too.

The Network is OK, I'm sure.

Does GC write to a log file? And how can I check swap usage?

Sorry, I'm a rookie.




jingych

From: Jean-Marc Spaggiari
Date: 2014-04-30 09:30
To: user; jingych
Subject: Re: Java Client Write Data blocked
Any logs? 


Garbage collection on the server side? Network issue? Swap?

Please share your master and region servers logs so we can provide feedback.

JM




2014-04-29 21:26 GMT-04:00 jingych <ji...@neusoft.com>:

Hi, All!

I need help!

I ran the Java client to write 3 million records into HBase,

but after writing almost 1 million, the process blocked without any exception.

Does anyone know the possible reason, so I can find a solution?

Thanks All!

By the way, the HBase Version is 0.94.6-cdh4.5.0!

Thanks again!




jingych
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s)
is intended only for the use of the intended recipient and may be confidential and/or privileged of
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is
not the intended recipient, unauthorized use, forwarding, printing,  storing, disclosure or copying
is strictly prohibited, and may be unlawful.If you have received this communication in error,please
immediately notify the sender by return e-mail, and delete the original message and all copies from
your system. Thank you.
---------------------------------------------------------------------------------------------------

Re: Re: Java Client Write Data blocked

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
This is the piece of code related to your stack trace:

              // check periodically to see if a system stop is requested
              if (Store.closeCheckInterval > 0) {
                bytesWritten += kv.getLength();
                if (bytesWritten > Store.closeCheckInterval) {
                  bytesWritten = 0;
                  isInterrupted(store, writer);
                }
              }


It seems something asked the region to close. Can you check the master log
at the same time to see if the balancer started? Your region was
compacting, and I think that if the balancer starts while a region is
compacting, the balancer asks the compaction to stop.

This is most probably not what caused your writes to be stuck.
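For reference, the pattern in that snippet (checking a stop flag only every `Store.closeCheckInterval` bytes rather than on every KeyValue) can be sketched in isolation. This is a minimal illustration with made-up class and method names, not HBase's actual code:

```java
import java.io.InterruptedIOException;
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicBoolean;

public class CloseCheckSketch {
    // Check for a requested stop only after this many bytes, so the
    // flag check is cheap relative to the per-record work.
    static final int CHECK_INTERVAL = 1024;

    // Processes records, aborting with InterruptedIOException if a stop
    // was requested; returns the number of records processed.
    static int copyRecords(int[] recordSizes, AtomicBoolean stopRequested)
            throws InterruptedIOException {
        int bytesWritten = 0;
        int processed = 0;
        for (int size : recordSizes) {
            // ... write the record somewhere ...
            processed++;
            bytesWritten += size;
            if (bytesWritten > CHECK_INTERVAL) { // periodic, not per record
                bytesWritten = 0;
                if (stopRequested.get()) {
                    throw new InterruptedIOException(
                        "Aborting copy because a stop was requested.");
                }
            }
        }
        return processed;
    }

    public static void main(String[] args) throws Exception {
        int[] sizes = new int[10];
        Arrays.fill(sizes, 600);
        System.out.println(copyRecords(sizes, new AtomicBoolean(false)));
    }
}
```

This is why the abort surfaces as an InterruptedIOException mid-compaction: the loop notices the stop request at the next interval boundary, not immediately.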

JM


2014-04-29 21:56 GMT-04:00 jingych <ji...@neusoft.com>:

>  I found this compaction activity in the region server log file:
>
> 2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820608, currentsize=13.9 M/14607128 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 2325ms, sequenceid=819260, compaction requested=true
>
> 2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
> 2014-04-29 16:23:25,374 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 4 file(s) in i of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. into tmpdir=hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp, seqid=819260, totalSize=120.0 M
> 2014-04-29 16:23:25,387 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426: CompoundBloomFilterWriter
> 2014-04-29 16:23:31,431 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. due to global heap pressure
> 2014-04-29 16:23:31,471 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b: CompoundBloomFilterWriter
> 2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b)
> 2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819276, memsize=41.8 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b
> 2014-04-29 16:23:35,116 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/i/18d58013c52f413497aca2ff1fd50c6b, entries=204648, sequenceid=819276, filesize=14.7 M
> 2014-04-29 16:23:35,131 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820792, currentsize=16.7 M/17528408 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 3700ms, sequenceid=819276, compaction requested=false
> 2014-04-29 16:23:35,278 INFO org.apache.hadoop.hbase.regionserver.SplitTransaction: Starting split of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
> 2014-04-29 16:23:36,622 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426)
>
> 2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426)
>
> 2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction interrupted
>
> java.io.InterruptedIOException: Aborting compaction of store i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. because it was interrupted.
>
> at org.apache.hadoop.hbase.regionserver.Compactor.isInterrupted(Compactor.java:230)
>
> at org.apache.hadoop.hbase.regionserver.Compactor.compact(Compactor.java:203)
> at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:1081)
> at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1336)
>
> at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:303)
>
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
>
> 2014-04-29 16:23:36,627 INFO org.apache.hadoop.hbase.regionserver.HRegion: Running close preflush of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
>
> 2014-04-29 16:23:36,628 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: aborted compaction: regionName=gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b., storeName=i, fileCount=4, fileSize=120.0 M (75.9 M, 14.7 M, 14.7 M, 14.7 M), priority=3, time=2838185051945976; duration=11sec
>
> 2014-04-29 16:23:36,647 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6: CompoundBloomFilterWriter
>
> 2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6)
>
> 2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819286, memsize=25.1 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6
>
> Is it normal to see this InterruptedIOException?
>
> Thanks!
>
> ------------------------------
>
> Jing Yucheng (井玉成)
>
> Basic Software Business Unit
>
> Neusoft Corporation
>
> Mobile: 13889491801
>
> Tel: 0411-84835702
>
> Room 217, Building D1, 901 Huangpu Road, Ganjingzi District, Dalian
>
> Postcode: 116085
>
> Email: jingych@neusoft.com
>
>  *From:* jingych <ji...@neusoft.com>
> *Date:* 2014-04-30 09:50
> *To:* Jean-Marc Spaggiari <je...@spaggiari.org>; user<us...@hbase.apache.org>
> *Subject:* Re: Re: Java Client Write Data blocked
>  Thanks JM.
>
> But the log is too big. How can I post the log file?
>
> Querying HBase is slower too.
>
> The network is OK, I'm sure.
>
> Does GC have a log file? And how can I check swap usage?
>
> Sorry, I'm a rookie.
>
>
>
>
> jingych
>
> From: Jean-Marc Spaggiari
> Date: 2014-04-30 09:30
> To: user; jingych
> Subject: Re: Java Client Write Data blocked
> Any logs?
>
>
> Garbage collection on the server side? Network issue? Swap?
>
>
> Please share your master and region servers logs so we can provide feedback.
>
> JM
>
>
>
>
> 2014-04-29 21:26 GMT-04:00 jingych <ji...@neusoft.com>:
>
> Hi, All!
>
> I need help!
>
> I ran the Java client to write 3 million records into HBase,
>
>
> but after writing almost 1 million, the process blocked without any exception.
>
> Does anyone know the possible reason, so I can find a solution?
>
> Thanks All!
>
> By the way, the HBase Version is 0.94.6-cdh4.5.0!
>
> Thanks again!
>
>
>
>
> jingych
>
> ---------------------------------------------------------------------------------------------------
>
> Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s)
>
> is intended only for the use of the intended recipient and may be confidential and/or privileged of
>
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is
>
> not the intended recipient, unauthorized use, forwarding, printing,  storing, disclosure or copying
>
> is strictly prohibited, and may be unlawful.If you have received this communication in error,please
>
> immediately notify the sender by return e-mail, and delete the original message and all copies from
> your system. Thank you.
>
> ---------------------------------------------------------------------------------------------------
>
>

Re: Re: Java Client Write Data blocked

Posted by jingych <ji...@neusoft.com>.
I found this compaction activity in the region server log file:

2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820608, currentsize=13.9 M/14607128 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 2325ms, sequenceid=819260, compaction requested=true
2014-04-29 16:23:25,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
2014-04-29 16:23:25,374 INFO org.apache.hadoop.hbase.regionserver.Store: Starting compaction of 4 file(s) in i of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. into tmpdir=hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp, seqid=819260, totalSize=120.0 M
2014-04-29 16:23:25,387 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426: CompoundBloomFilterWriter
2014-04-29 16:23:31,431 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Flush of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. due to global heap pressure
2014-04-29 16:23:31,471 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b: CompoundBloomFilterWriter
2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b) 
2014-04-29 16:23:35,094 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819276, memsize=41.8 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/18d58013c52f413497aca2ff1fd50c6b
2014-04-29 16:23:35,116 INFO org.apache.hadoop.hbase.regionserver.Store: Added hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/i/18d58013c52f413497aca2ff1fd50c6b, entries=204648, sequenceid=819276, filesize=14.7 M
2014-04-29 16:23:35,131 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished memstore flush of ~41.8 M/43820792, currentsize=16.7 M/17528408 for region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. in 3700ms, sequenceid=819276, compaction requested=false
2014-04-29 16:23:35,278 INFO org.apache.hadoop.hbase.regionserver.SplitTransaction: Starting split of region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
2014-04-29 16:23:36,622 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426) 
2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/9bdbdbccb6c94d978b740765f5e01426) 
2014-04-29 16:23:36,626 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction interrupted
java.io.InterruptedIOException: Aborting compaction of store i in region gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b. because it was interrupted.
at org.apache.hadoop.hbase.regionserver.Compactor.isInterrupted(Compactor.java:230)
at org.apache.hadoop.hbase.regionserver.Compactor.compact(Compactor.java:203)
at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:1081)
at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1336)
at org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest.run(CompactionRequest.java:303)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
2014-04-29 16:23:36,627 INFO org.apache.hadoop.hbase.regionserver.HRegion: Running close preflush of gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b.
2014-04-29 16:23:36,628 INFO org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest: aborted compaction: regionName=gspt_jbxx,,1398754604428.d6fd8d39289985adda9a3e048b92a24b., storeName=i, fileCount=4, fileSize=120.0 M (75.9 M, 14.7 M, 14.7 M, 14.7 M), priority=3, time=2838185051945976; duration=11sec
2014-04-29 16:23:36,647 INFO org.apache.hadoop.hbase.regionserver.StoreFile: Delete Family Bloom filter type for hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6: CompoundBloomFilterWriter
2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.StoreFile: NO General Bloom and NO DeleteFamily was added to HFile (hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6) 
2014-04-29 16:23:37,694 INFO org.apache.hadoop.hbase.regionserver.Store: Flushed , sequenceid=819286, memsize=25.1 M, into tmp file hdfs://cdh-namenode:8020/hbase/gspt_jbxx/d6fd8d39289985adda9a3e048b92a24b/.tmp/0495ad0bb6dd467eafd78eafd66897d6

Is it normal to see this InterruptedIOException?

Thanks!




Jing Yucheng (井玉成)

Basic Software Business Unit
Neusoft Corporation
Mobile: 13889491801
Tel: 0411-84835702

Room 217, Building D1, 901 Huangpu Road, Ganjingzi District, Dalian
Postcode: 116085
Email: jingych@neusoft.com

From: jingych
Date: 2014-04-30 09:50
To: Jean-Marc Spaggiari; user
Subject: Re: Re: Java Client Write Data blocked
Thanks JM.

But the log is too big. How can I post the log file?

Querying HBase is slower too.

The network is OK, I'm sure.

Does GC have a log file? And how can I check swap usage?

Sorry, I'm a rookie.




jingych

From: Jean-Marc Spaggiari
Date: 2014-04-30 09:30
To: user; jingych
Subject: Re: Java Client Write Data blocked
Any logs? 


Garbage collection on the server side? Network issue? Swap?

Please share your master and region servers logs so we can provide feedback.

JM




2014-04-29 21:26 GMT-04:00 jingych <ji...@neusoft.com>:

Hi, All!

I need help!

I ran the Java client to write 3 million records into HBase,

but after writing almost 1 million, the process blocked without any exception.

Does anyone know the possible reason, so I can find a solution?

Thanks All!

By the way, the HBase Version is 0.94.6-cdh4.5.0!

Thanks again!




jingych
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s)
is intended only for the use of the intended recipient and may be confidential and/or privileged of
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is
not the intended recipient, unauthorized use, forwarding, printing,  storing, disclosure or copying
is strictly prohibited, and may be unlawful.If you have received this communication in error,please
immediately notify the sender by return e-mail, and delete the original message and all copies from
your system. Thank you.
---------------------------------------------------------------------------------------------------

Re: Re: Java Client Write Data blocked

Posted by jingych <ji...@neusoft.com>.
Thanks JM.

But the log is too big. How can I post the log file?

Querying HBase is slower too.

The network is OK, I'm sure.

Does GC have a log file? And how can I check swap usage?

Sorry, I'm a rookie.




jingych

From: Jean-Marc Spaggiari
Date: 2014-04-30 09:30
To: user; jingych
Subject: Re: Java Client Write Data blocked
Any logs? 


Garbage collection on the server side? Network issue? Swap?

Please share your master and region servers logs so we can provide feedback.

JM




2014-04-29 21:26 GMT-04:00 jingych <ji...@neusoft.com>:

Hi, All!

I need help!

I ran the Java client to write 3 million records into HBase,

but after writing almost 1 million, the process blocked without any exception.

Does anyone know the possible reason, so I can find a solution?

Thanks All!

By the way, the HBase Version is 0.94.6-cdh4.5.0!

Thanks again!




jingych
---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s)
is intended only for the use of the intended recipient and may be confidential and/or privileged of
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is
not the intended recipient, unauthorized use, forwarding, printing,  storing, disclosure or copying
is strictly prohibited, and may be unlawful.If you have received this communication in error,please
immediately notify the sender by return e-mail, and delete the original message and all copies from
your system. Thank you.
---------------------------------------------------------------------------------------------------

Re: Java Client Write Data blocked

Posted by Jean-Marc Spaggiari <je...@spaggiari.org>.
Any logs?

Garbage collection on the server side? Network issue? Swap?

Please share your master and region servers logs so we can provide feedback.
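In case GC logging is not already enabled on your region servers, here is a minimal sketch of how it could be turned on and how swapping can be checked. The log path and flags below are assumptions for a 2014-era HotSpot JVM, not taken from your setup:

```shell
# In conf/hbase-env.sh (sketch): write GC activity to a dedicated log file
# so long GC pauses around the time writes stalled become visible.
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc-hbase.log"

# On the region server host, check whether the box is swapping:
free -m       # the "Swap:" row shows how much swap is in use
vmstat 1 5    # non-zero "si"/"so" columns mean active swap in/out
```

Restart the region server after changing hbase-env.sh for the flags to take effect.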

JM


2014-04-29 21:26 GMT-04:00 jingych <ji...@neusoft.com>:

> Hi, All!
>
> I need help!
>
> I ran the Java client to write 3 million records into HBase,
>
> but after writing almost 1 million, the process blocked without any
> exception.
>
> Does anyone know the possible reason, so I can find a solution?
>
> Thanks All!
>
> By the way, the HBase Version is 0.94.6-cdh4.5.0!
>
> Thanks again!
>
>
>
>
> jingych
>
> ---------------------------------------------------------------------------------------------------
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
>  storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
>
> ---------------------------------------------------------------------------------------------------
>