Posted to common-user@hadoop.apache.org by Thiago Jackiw <tj...@gmail.com> on 2007/12/05 03:52:23 UTC

Strange HBase behavior

I've just checked out Hadoop from svn trunk, compiled it with 'ant' by
following the instructions on the wiki, and I'm getting strange
behavior when going to HBase's web interface. Note that I don't even
have any data stored in Hadoop's DFS, nor a crazy custom cluster
configuration.

Tailing HBase's master log file, this is what happens first:
====
2007-12-04 18:25:18,625 WARN org.apache.hadoop.ipc.Server: Out of
Memory in server select
java.lang.OutOfMemoryError: Java heap space
	at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:39)
	at java.nio.ByteBuffer.allocate(ByteBuffer.java:312)
	at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:503)
	at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:379)
	at org.apache.hadoop.ipc.Server$Listener.run(Server.java:296)


Then it turns into this:
====
2007-12-04 18:37:55,054 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: /192.168.9.8:60020. Already tried 1 time(s).
2007-12-04 18:37:56,055 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: /192.168.9.8:60020. Already tried 2 time(s).
2007-12-04 18:37:57,056 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: /192.168.9.8:60020. Already tried 3 time(s).
2007-12-04 18:37:58,056 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: /192.168.9.8:60020. Already tried 4 time(s).
2007-12-04 18:37:59,057 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: /192.168.9.8:60020. Already tried 5 time(s).
2007-12-04 18:38:00,058 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: /192.168.9.8:60020. Already tried 6 time(s).
2007-12-04 18:38:01,058 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: /192.168.9.8:60020. Already tried 7 time(s).
2007-12-04 18:38:02,059 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: /192.168.9.8:60020. Already tried 8 time(s).
2007-12-04 18:38:03,060 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: /192.168.9.8:60020. Already tried 9 time(s).
2007-12-04 18:38:04,060 INFO org.apache.hadoop.ipc.Client: Retrying
connect to server: /192.168.9.8:60020. Already tried 10 time(s)
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)


I've set up my cluster as follows:

hadoop-site.xml
====
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
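(As an aside: later Hadoop versions expect fs.default.name to be a fully-qualified URI rather than a bare host:port pair. If the bare form gives trouble, a scheme-qualified variant would look roughly like this, using the same port as above:)

```xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000/</value>
</property>
```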


hadoop-env.sh:
====
export JAVA_HOME=/usr/lib/java


hbase-site.xml
====
<configuration>
  <property>
    <name>hbase.master</name>
    <value>localhost:60000</value>
    <description>The host and port that the HBase master runs at</description>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>/hbase</value>
    <description>location of HBase instance in dfs</description>
  </property>
</configuration>
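(Note: in later HBase releases hbase.rootdir is expected to be a fully-qualified URI as well, not a bare DFS path. With the fs.default.name above, that would read roughly:)

```xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
  <description>location of HBase instance in dfs</description>
</property>
```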


/etc/hosts
====
127.0.0.1     localhost
192.168.9.8 localhost
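(For comparison: a single-node setup more commonly maps the LAN address to the machine's real hostname instead of adding a second 'localhost' entry. 'myhost' below is a placeholder, not a name from this setup:)

```
127.0.0.1     localhost
192.168.9.8   myhost
```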

====

Any ideas?

Also, 'bin/stop-hbase.sh' doesn't actually stop the server while the
"org.apache.hadoop.ipc.Client: Retrying connect to server:
/192.168.9.8:60020. Already tried 8 time(s)" problem persists.
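(As a workaround when the stop script hangs, the master can usually be signalled by hand via the pid file the start script wrote. The /tmp location and file naming below are assumptions; check HBASE_PID_DIR in your hbase-env.sh for the actual directory:)

```shell
# Sketch: signal the HBase master directly when stop-hbase.sh hangs.
# Pid-file path is an assumption -- adjust to your HBASE_PID_DIR.
PID_FILE="/tmp/hbase-$USER-master.pid"
if [ -f "$PID_FILE" ]; then
  kill "$(cat "$PID_FILE")"   # ask the master JVM to shut down
else
  echo "no pid file at $PID_FILE"
fi
```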

Thanks

Re: Strange HBase behavior

Posted by stack <st...@duboce.net>.
Try giving your HBase master more memory. See item 3 in the FAQ:
http://wiki.apache.org/lucene-hadoop/Hbase/FAQ#3.
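(For reference, that fix amounts to something like the following in conf/hbase-env.sh. HBASE_HEAPSIZE is the conventional knob and takes a value in MB; 1000 is illustrative, and the exact variable name may differ in your checkout:)

```shell
# Sketch: raise the JVM heap for the HBase daemons (value in MB).
# 1000 is illustrative -- pick what your machine can spare.
export HBASE_HEAPSIZE=1000
```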
St.Ack

Thiago Jackiw wrote:
> [original message quoted in full]