Posted to user@hbase.apache.org by Nick Dimiduk <nd...@gmail.com> on 2013/07/30 03:13:19 UTC

Re: 60000 millis timeout while waiting for channel to be ready for read

Hi Shapoor,

Moving the conversation to the users list.

Have you solved your issue? Sorry you haven't gotten a response sooner -- I
think everyone is working overtime to get 0.96 released.

I'm assuming each put is independent of the others. You're not putting
100mm times to the same row, are you?  I'm also curious, did you pre-split
your table before starting all of those inserts?
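If you haven't pre-split, the idea is to choose split keys up front so the initial load spreads across regions instead of hammering one. A rough sketch of computing evenly spaced split keys for a "doc-id-NNNNNNNNN" style row key (the key format here is only an assumption based on the region name in your log, not knowledge of your schema):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitKeys {
    // Generate numSplits evenly spaced boundaries over [0, maxId)
    // for row keys shaped like "doc-id-000000000".
    static List<String> evenSplits(long maxId, int numSplits) {
        List<String> keys = new ArrayList<>();
        for (int i = 1; i <= numSplits; i++) {
            long boundary = maxId / (numSplits + 1) * i;
            // Zero-pad so lexicographic order matches numeric order.
            keys.add(String.format("doc-id-%09d", boundary));
        }
        return keys;
    }

    public static void main(String[] args) {
        // 100,000,000 documents split into ~10 regions -> 9 boundaries.
        for (String k : evenSplits(100_000_000L, 9)) {
            System.out.println(k);
        }
    }
}
```

IIRC you'd then pass these (converted with Bytes.toBytes) to HBaseAdmin.createTable(desc, splitKeys); the important part is that the boundaries match your actual key distribution.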

In the log you pasted, it looks like host kcs-testhadoop02 is the one that
times out. Can you reproduce the event and send us the RegionServer logs
from that machine, from around the time of the timeout? Please use a
pastebin service rather than pasting to the list directly.
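On the connection buildup: the client buffers puts and flushes them to the region servers in batches, so what you're seeing is probably sockets accumulating per region server rather than one connection per flush. The buffering behaves roughly like this (a simplified stand-in for illustration, not the real HConnection/HTable code):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the HBase client-side write buffer: puts
// accumulate until their total size crosses the limit, then flush as a batch.
public class WriteBufferSketch {
    private final long bufferLimitBytes;
    private final List<byte[]> buffer = new ArrayList<>();
    private long bufferedBytes = 0;
    int flushes = 0; // how many batched round-trips were made

    WriteBufferSketch(long bufferLimitBytes) {
        this.bufferLimitBytes = bufferLimitBytes;
    }

    void put(byte[] row) {
        buffer.add(row);
        bufferedBytes += row.length;
        if (bufferedBytes >= bufferLimitBytes) {
            flush();
        }
    }

    void flush() {
        // The real client sends one multi-put RPC per region server involved.
        buffer.clear();
        bufferedBytes = 0;
        flushes++;
    }

    public static void main(String[] args) {
        // 1000 puts of 100KB against a 2MB buffer: a flush every ~20 puts.
        WriteBufferSketch w = new WriteBufferSketch(2 * 1024 * 1024);
        for (int i = 0; i < 1000; i++) {
            w.put(new byte[100 * 1024]);
        }
        System.out.println("flushes=" + w.flushes);
    }
}
```

With the real client you'd typically call table.setAutoFlush(false) and tune hbase.client.write.buffer so 100KB documents go out in fewer, bigger round-trips.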

Thanks,
Nick

On Tuesday, July 9, 2013, shapoor wrote:

> hello,
>
> i am doing a lot of saves in HBase: around 100,000,000 documents, each 100KB.
> before i start the program there are almost 18 connections, after starting
> my cluster of 2 regionservers and one master. the connections are to
> zookeeper, hdfs and hbase. as the saving process runs, i repeatedly get more
> connections (i guess one for each flush) until i reach almost 80 connections,
> and that's when the following exception appears. HBase manages to save the
> data somehow, but it is not efficient with so many connections. how do i
> solve this problem?
>
> regards, shapoor
>
> 13/07/09 13:31:48 WARN client.HConnectionManager$HConnectionImplementation: Failed all from region=table2,doc-id-866604,1373369430484.09001c90b3d2a4c20b56c35bb976ff91., hostname=kcs-testhadoop02, port=60020
> java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Call to kcs-testhadoop02/192.168.111.211:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.72:55354 remote=kcs-testhadoop02/192.168.111.211:60020]
>         at java.util.concurrent.FutureTask$Sync.innerGet(Unknown Source)
>         at java.util.concurrent.FutureTask.get(Unknown Source)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1598)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1450)
>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:916)
>         at at.knowcenter.backends.HBaseStorage.flush(HBaseStorage.java:324)
>         at at.knowcenter.evaltool.Evaluate.save(Evaluate.java:155)
>         at at.knowcenter.evaltool.Evaluate.performSaveEvaluation(Evaluate.java:100)
>         at at.knowcenter.evaltool.Evaluate.evaluate(Evaluate.java:77)
>         at at.knowcenter.evaltool.EvaluationTool.execute(EvaluationTool.java:144)
>         at at.knowcenter.evaltool.EvaluationTool.main(EvaluationTool.java:199)
> Caused by: java.net.SocketTimeoutException: Call to kcs-testhadoop02/192.168.111.211:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.72:55354 remote=kcs-testhadoop02/192.168.111.211:60020]
>         at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:1026)
>         at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:999)
>         at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
>         at com.sun.proxy.$Proxy6.multi(Unknown Source)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1427)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3$1.call(HConnectionManager.java:1425)
>         at org.apache.hadoop.hbase.client.ServerCallable.withoutRetries(ServerCallable.java:215)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1434)
>         at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation$3.call(HConnectionManager.java:1422)
>         at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
>         at java.util.concurrent.FutureTask.run(Unknown Source)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>         at java.lang.Thread.run(Unknown Source)
> Caused by: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.168.111.72:55354 remote=kcs-testhadoop02/192.168.111.211:60020]
>         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
>         at java.io.FilterInputStream.read(Unknown Source)
>         at org.apache.hadoop.hbase.ipc.HBaseClient$Connection$PingInputStream.read(HBaseClient.java:373)
>         at java.io.BufferedInputStream.fill(Unknown Source)
>         at java.io.BufferedInputStream.read(Unknown Source)
>         at java.io.DataInputStream.readInt(Unknown Source)
>         at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:646)
>         at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:580)
>
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/60000-millis-timeout-while-waiting-for-channel-to-be-ready-for-read-tp4047612.html
> Sent from the HBase Developer mailing list archive at Nabble.com.
>

Re: 60000 millis timeout while waiting for channel to be ready for read

Posted by Nick Dimiduk <nd...@gmail.com>.
Hi Shapoor,

Sorry for the late response. Hopefully you've resolved your issue already.

HBase-0.96.1 was discarded, replaced by 0.96.1.1. You should use the
appropriate build of that version, so yes, "0.96.1.1-hadoop2" sounds
correct for you.


On Wed, Jan 29, 2014 at 1:18 AM, shapoor <es...@yahoo.com> wrote:

> hi Nick,
> thanks for the tip. I was able to get hbase running with "0.96.1-hadoop2".
> Now I have a problem with Jersey and I think it has to do with this new
> hbase version.
> "0.96.1-hadoop2" is on the official download list of hbase. Should I not
> use it at all and instead use "0.96.1.1-hadoop2"? What would you suggest?
>
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/60000-millis-timeout-while-waiting-for-channel-to-be-ready-for-read-tp4047612p4055440.html
> Sent from the HBase Developer mailing list archive at Nabble.com.
>

Re: 60000 millis timeout while waiting for channel to be ready for read

Posted by shapoor <es...@yahoo.com>.
hi Nick,
thanks for the tip. I was able to get hbase running with "0.96.1-hadoop2".
Now I have a problem with Jersey and I think it has to do with this new hbase
version.
"0.96.1-hadoop2" is on the official download list of hbase. Should I not use
it at all and instead use "0.96.1.1-hadoop2"? What would you suggest?



--
View this message in context: http://apache-hbase.679495.n3.nabble.com/60000-millis-timeout-while-waiting-for-channel-to-be-ready-for-read-tp4047612p4055440.html
Sent from the HBase Developer mailing list archive at Nabble.com.

Re: 60000 millis timeout while waiting for channel to be ready for read

Posted by Nick Dimiduk <nd...@gmail.com>.
Moving to user@hbase

Hi Shapoor,

You're looking for the 0.96.1.1 releases. 0.96.1 had an incompatible flaw,
so that release was sunk almost immediately after it was pushed.

Have a look at
http://mvnrepository.com/artifact/org.apache.hbase/hbase/0.96.1.1-hadoop2
for the jars you need.
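For example, the client-facing dependency would look something like this (I'm assuming you want hbase-client, which is the module the 0.96 line publishes for client code):

```xml
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <version>0.96.1.1-hadoop2</version>
</dependency>
```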

Thanks,
Nick


On Tue, Jan 28, 2014 at 6:17 AM, shapoor <es...@yahoo.com> wrote:

> Hi Nick,
> I solved that problem a long time ago. Apparently a combination of
> properties I found extended the time allowed for the processes to run.
> Now I have a new issue with 0.96.1-hadoop2 using dfs of hadoop2.2.0. I have
> the following hbase dependency in my pom.xml :
>
>                 <dependency>
>                         <groupId>org.apache.hbase</groupId>
>                         <artifactId>hbase-hadoop-compat</artifactId>
>                         <version>0.96.1-hadoop2</version>
>                 </dependency>
>
> it just can't find "0.96.1-hadoop2.jar". I generated the jars from source
> code and there is still no 0.96.1-hadoop2.jar.
>
> do you know what the cause could be?
>
> PS: I was not here for a long time and didn't see your post. Sorry.
>
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/60000-millis-timeout-while-waiting-for-channel-to-be-ready-for-read-tp4047612p4055396.html
> Sent from the HBase Developer mailing list archive at Nabble.com.
>

Re: 60000 millis timeout while waiting for channel to be ready for read

Posted by shapoor <es...@yahoo.com>.
Hi Nick,
I solved that problem a long time ago. Apparently a combination of
properties I found extended the time allowed for the processes to run.
Now I have a new issue with 0.96.1-hadoop2 using dfs of hadoop2.2.0. I have
the following hbase dependency in my pom.xml :

		<dependency>
			<groupId>org.apache.hbase</groupId>
			<artifactId>hbase-hadoop-compat</artifactId>
			<version>0.96.1-hadoop2</version>
		</dependency>

it just can't find "0.96.1-hadoop2.jar". I generated the jars from source
code and there is still no 0.96.1-hadoop2.jar.

do you know what the cause could be?

PS: I was not here for a long time and didn't see your post. Sorry.



--
View this message in context: http://apache-hbase.679495.n3.nabble.com/60000-millis-timeout-while-waiting-for-channel-to-be-ready-for-read-tp4047612p4055396.html
Sent from the HBase Developer mailing list archive at Nabble.com.