Posted to user@hbase.apache.org by Onur AKTAS <on...@live.com> on 2009/08/03 04:14:31 UTC

Connection failure to HBase

Hi,

I have just installed Hadoop 19.3 (pseudo-distributed mode) and HBase 19.2 by following the instructions.
Both of them start fine.

Hadoop Log:
$ bin/start-all.sh 
starting namenode, logging to /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-datanode-localhost.localdomain.out
....

HBase Log:
$ bin/start-hbase.sh 
starting master, logging to /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-master-localhost.localdomain.out
localhost: starting regionserver, logging to /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-regionserver-localhost.localdomain.out

When I try to connect to HBase from a client, it gives this error:

Aug 3, 2009 3:35:04 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection handleConnectionFailure
INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 0 time(s).
Aug 3, 2009 3:35:05 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection handleConnectionFailure
INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 1 time(s).

I have configured the *-site.xml files etc. with "localhost:9000". How can I change
that 60000 port in the client? I use the following in my Java class:
HBaseConfiguration config = new HBaseConfiguration();
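For reference, a hedged sketch of the fix the replies converge on: the port 60000 in the retry message is the HBase master's default RPC port, not the HDFS port 9000 from fs.default.name, so the client needs the master's address, either via config.set("hbase.master", ...) on the HBaseConfiguration object or in an hbase-site.xml on the client classpath (property name per the 0.19/0.20-era docs; values are illustrative):

```xml
<!-- client-side hbase-site.xml (sketch); 60000 is the default master RPC port -->
<configuration>
  <property>
    <name>hbase.master</name>
    <value>localhost:60000</value>
  </property>
</configuration>
```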

Thanks.


RE: Connection failure to HBase

Posted by Onur AKTAS <on...@live.com>.
Sorry, I was trying with Hadoop 0.19.2 and HBase 0.19.3 (I wrote Hadoop 0.19.3 and HBase 0.19.2 by mistake).
Anyway, now I am trying with Hadoop 0.20.0 and HBase 0.20.0.

Here are my Hadoop configuration files.

core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
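As an aside (not from the thread; worth verifying against core-default.xml for your release): with only the properties above, Hadoop falls back to its default hadoop.tmp.dir of /tmp/hadoop-${user.name}, and HDFS keeps its name and data directories under it. /tmp is often cleaned on reboot, which can leave a NameNode that cannot place blocks on any DataNode. A sketch of pinning it to a persistent location (the path itself is illustrative):

```xml
<!-- addition to core-site.xml (sketch); choose a persistent local path -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/hda3/ps/hadoop-data</value>
</property>
```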

Here is my HBase configuration file.
hbase-site.xml:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
    <description>The directory shared by region servers.
    </description>
  </property>
</configuration>
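An aside, based on my reading of the HBase 0.20 pseudo-distributed docs (verify against your release): besides pointing hbase.rootdir at HDFS, the cluster also has to be declared distributed, otherwise HBase keeps running in local mode. The property below is the one I believe the 0.20 docs call for; treat the name as an assumption to check:

```xml
<!-- additional hbase-site.xml property for pseudo-distributed mode (sketch) -->
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
```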

That's all. HBase works fine when I use it standalone, but I could not make it work on Hadoop in pseudo-distributed mode.
I'm going to ask the Hadoop list too.
Thanks.

> Date: Mon, 3 Aug 2009 14:39:33 -0400
> Subject: Re: Connection failure to HBase
> From: jdcryans@apache.org
> To: hbase-user@hadoop.apache.org
> 
> I see many problems here.
> 
> First, it seems you are trying to use HBase 0.19 with Hadoop 0.20. As
> the HBase 0.19 docs say, it only works on Hadoop 0.19.x. Also, in
> your first email you told us that you are using Hadoop 0.19.3 (which,
> btw, isn't released; 0.19.2 just was), so that's quite confusing.
> 
> Also, Hadoop by default writes to /tmp/hadoop-#{username}, so there
> must be something wrong in your configuration if it's trying to use
> that as the filesystem. The exception you see normally means that the
> Namenode wasn't able to assign data to any Datanodes. Please confirm
> that your Hadoop configuration is OK; further Hadoop-related questions
> should be directed to their mailing list.
> 
> Once you have sorted out these problems, it will be much easier to run HBase.
> 
> Cheers,
> 
> J-D
> 
> 2009/8/3 Onur AKTAS <on...@live.com>:
> >
> > Here is what I do.
> >
> > Pseudo-Distributed Operation in: http://hadoop.apache.org/common/docs/current/quickstart.html
> > I edit
> >
> > conf/core-site.xml,
> > conf/hdfs-site.xml,
> > conf/mapred-site.xml:
> >
> >
> > $ bin/hadoop namenode -format
> >
> >
> > $ bin/start-all.sh
> > starting namenode, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-namenode-localhost.localdomain.out
> > localhost: starting datanode, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-datanode-localhost.localdomain.out
> > localhost: starting secondarynamenode, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-secondarynamenode-localhost.localdomain.out
> > starting jobtracker, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-jobtracker-localhost.localdomain.out
> > localhost: starting tasktracker, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-tasktracker-localhost.localdomain.out
> >
> >
> >
> > When I check the logs in hadoop-oracle-namenode-localhost.localdomain.log, I see something like:
> > 2009-08-03 21:26:20,757 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/jobtracker.info, DFSClient_-1600979110) from 127.0.0.1:22460: error: java.io.IOException: File /tmp/hadoop-oracle/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
> > java.io.IOException: File /tmp/hadoop-oracle/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
> >    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
> >    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
> >    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >    at java.lang.reflect.Method.invoke(Method.java:597)
> >    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> >    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> >    at java.security.AccessController.doPrivileged(Native Method)
> >    at javax.security.auth.Subject.doAs(Subject.java:396)
> >    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> > (the same IOException and stack trace repeat for three more addBlock calls, at 21:26:21,177, 21:26:21,982 and 21:26:23,612)
> >
> >
> > What's the problem? And why does Hadoop choose the /tmp directory for its filesystem? Is it because of pseudo-distributed mode?
> >
> > Thanks.
> >
> >> Date: Mon, 3 Aug 2009 09:33:20 -0400
> >> Subject: Re: Connection failure to HBase
> >> From: jdcryans@apache.org
> >> To: hbase-user@hadoop.apache.org
> >>
> >> If the client is not able to talk to the Master, it means that
> >> something went wrong that prevented the Master from starting. Look in the
> >> master's log; you should see an exception.
> >>
> >> J-D
> >>
> >> 2009/8/3 Onur AKTAS <on...@live.com>:
> >> >
> >> > No, this is what I have after I changed it.
> >> >
> >> > I was using the configuration below, but it was not working; it was giving an exception like "INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried"
> >> > <property>
> >> >    <name>hbase.rootdir</name>
> >> >    <value>hdfs://localhost:9000/hbase</value>
> >> >    <description>The directory shared by region servers.
> >> >    </description>
> >> > </property>
> >> >
> >> >
> >> >> Date: Mon, 3 Aug 2009 08:36:09 -0400
> >> >> Subject: Re: Connection failure to HBase
> >> >> From: jdcryans@apache.org
> >> >> To: hbase-user@hadoop.apache.org
> >> >>
> >> >> If this is all of your hbase-site.xml, you're not using Hadoop at all.
> >> >> Please review the Pseudo-distributed documentation for HBase.
> >> >>
> >> >> J-D
> >> >>
> >> >> 2009/8/3 Onur AKTAS <on...@live.com>:
> >> >> >
> >> >> > I have changed hbase-site.xml as below, and it now works (in local mode). Is it something about Hadoop, maybe?
> >> >> >
> >> >> > <configuration>
> >> >> >  <property>
> >> >> >    <name>hbase.master</name>
> >> >> >    <value>localhost:60000</value>
> >> >> >    <description>The directory shared by region servers.
> >> >> >    </description>
> >> >> >  </property>
> >> >> >  <property>
> >> >> >    <name>hbase.regionserver</name>
> >> >> >    <value>localhost:60020</value>
> >> >> >  </property>
> >> >> > </configuration>
> >> >> >
> >> >> >
> >> >> >> Date: Sun, 2 Aug 2009 19:25:16 -0700
> >> >> >> Subject: Re: Connection failure to HBase
> >> >> >> From: vpuranik@gmail.com
> >> >> >> To: hbase-user@hadoop.apache.org
> >> >> >>
> >> >> >> You can set hbase.master property on the configuration object:
> >> >> >>
> >> >> >> config.set("hbase.master", "localhost:9000");
> >> >> >>
> >> >> >> Regards,
> >> >> >> Vaibhav
> >> >> >>
> >> >> >> 2009/8/2 Onur AKTAS <on...@live.com>
> >> >> >>
> >> >> >> >
> >> >> >> > Hi,
> >> >> >> >
> >> >> >> > I have just installed Hadoop 19.3 (pseudo distributed mode) and Hbase 19.2
> >> >> >> > by following the instructions.
> >> >> >> > Both of them start fine.
> >> >> >> >
> >> >> >> > Hadoop Log:
> >> >> >> > $ bin/start-all.sh
> >> >> >> > starting namenode, logging to
> >> >> >> > /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-namenode-localhost.localdomain.out
> >> >> >> > localhost: starting datanode, logging to
> >> >> >> > /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-datanode-localhost.localdomain.out
> >> >> >> > ....
> >> >> >> >
> >> >> >> > HBase Log:
> >> >> >> > $ bin/start-hbase.sh
> >> >> >> > starting master, logging to
> >> >> >> > /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-master-localhost.localdomain.out
> >> >> >> > localhost:
> >> >> >> > starting regionserver, logging to
> >> >> >> >
> >> >> >> > /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-regionserver-localhost.localdomain.out
> >> >> >> >
> >> >> >> > When I try to connect HBase from a client, it gives an error as:
> >> >> >> >
> >> >> >> > Aug 3, 2009 3:35:04 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection handleConnectionFailure
> >> >> >> > INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 0 time(s).
> >> >> >> > Aug 3, 2009 3:35:05 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection handleConnectionFailure
> >> >> >> > INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 1 time(s).
> >> >> >> >
> >> >> >> > I have configured the *-site.xml files etc. with "localhost:9000". How can I change
> >> >> >> > that 60000 port in the client? I use the following in my Java class:
> >> >> >> > HBaseConfiguration config = new HBaseConfiguration();
> >> >> >> >
> >> >> >> > Thanks.
> >> >> >> >


Re: Connection failure to HBase

Posted by Jean-Daniel Cryans <jd...@apache.org>.
I see many problems here.

First, it seems you are trying to use HBase 0.19 with Hadoop 0.20. As
the HBase 0.19 docs say, it only works on Hadoop 0.19.x. Also, in
your first email you told us that you are using Hadoop 0.19.3 (which,
btw, isn't released; 0.19.2 just was), so that's quite confusing.

Also, Hadoop by default writes to /tmp/hadoop-#{username}, so there
must be something wrong in your configuration if it's trying to use
that as the filesystem. The exception you see normally means that the
Namenode wasn't able to assign data to any Datanodes. Please confirm
that your Hadoop configuration is OK; further Hadoop-related questions
should be directed to their mailing list.

Once you have sorted out these problems, it will be much easier to run HBase.

Cheers,

J-D
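The suggested confirmation of the Hadoop side can be sketched as a few commands (Hadoop 0.19/0.20-era CLI, run from the Hadoop install directory; each line is guarded so the sketch is harmless to paste on a machine without Hadoop):

```shell
# Confirm the HDFS daemons are up before blaming HBase.
command -v jps >/dev/null 2>&1 && jps || true
# jps should list NameNode, DataNode, JobTracker and TaskTracker.

[ -x bin/hadoop ] && bin/hadoop dfsadmin -report || true
# The report should show at least one live datanode; zero live datanodes is
# exactly what produces "could only be replicated to 0 nodes, instead of 1".

[ -x bin/hadoop ] && bin/hadoop fs -ls / || true
# Success here means a client really can reach hdfs://localhost:9000.

done_marker="checks attempted"
echo "$done_marker"
```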


RE: Connection failure to HBase

Posted by Onur AKTAS <on...@live.com>.
Here is what I do.

Pseudo-Distributed Operation in: http://hadoop.apache.org/common/docs/current/quickstart.html 
I edit 
        
conf/core-site.xml, 
conf/hdfs-site.xm, 
conf/mapred-site.xml:

          
$ bin/hadoop namenode -format

          
$ bin/start-all.shstarting namenode, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-datanode-localhost.localdomain.out
localhost: starting secondarynamenode, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-jobtracker-localhost.localdomain.out
localhost: starting tasktracker, logging to /hda3/ps/hadoop-0.20.0/bin/../logs/hadoop-oracle-tasktracker-localhost.localdomain.out



When I check the logs in hadoop-oracle-namenode-localhost.localdomain.log
 I see something like
2009-08-03 21:26:20,757 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/jobtracker.info, DFSClient_-1600979110) from 127.0.0.1:22460: error: java.io.IOException: File /tmp/hadoop-oracle/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /tmp/hadoop-oracle/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1256)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
2009-08-03 21:26:21,177 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/jobtracker.info, DFSClient_-1600979110) from 127.0.0.1:22460: error: java.io.IOException: File /tmp/hadoop-oracle/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
2009-08-03 21:26:21,982 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/jobtracker.info, DFSClient_-1600979110) from 127.0.0.1:22460: error: java.io.IOException: File /tmp/hadoop-oracle/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
2009-08-03 21:26:23,612 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000, call addBlock(/tmp/hadoop-oracle/mapred/system/jobtracker.info, DFSClient_-1600979110) from 127.0.0.1:22460: error: java.io.IOException: File /tmp/hadoop-oracle/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1


What's the problem? And why does Hadoop put its file system under the /tmp directory? Is it because of pseudo-distributed mode? 
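For what it's worth, Hadoop's hadoop.tmp.dir property defaults to /tmp/hadoop-${user.name}, and the HDFS name and data directories derive from it unless set explicitly, so a fresh install lands under /tmp in any run mode. A sketch of how it could be moved in hadoop-site.xml (the /hda3/ps/hadoop-data path is just an example):

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <!-- default is /tmp/hadoop-${user.name}; dfs.name.dir and dfs.data.dir
       derive from this value unless they are configured separately -->
  <value>/hda3/ps/hadoop-data</value>
</property>
```

After changing it the namenode would need to be reformatted (bin/hadoop namenode -format), since the old image lives under the previous path.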

Thanks.

> Date: Mon, 3 Aug 2009 09:33:20 -0400
> Subject: Re: Connection failure to HBase
> From: jdcryans@apache.org
> To: hbase-user@hadoop.apache.org
> 
> If the client is not able to talk to the Master, it means that
> something wrong happened there that prevents it to start. Look in the
> master's log you should see an exception.
> 
> J-D
> 
> 2009/8/3 Onur AKTAS <on...@live.com>:
> >
> > No, this is what after I changed.
> >
> > I was using like below, but it was not working. It was giving an exception like "INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried"
> > <property>
> >    <name>hbase.rootdir</name>
> >    <value>hdfs://localhost:9000/hbase</value>
> >    <description>The directory shared by region servers.
> >    </description>
> > </property>
> >
> >
> >> Date: Mon, 3 Aug 2009 08:36:09 -0400
> >> Subject: Re: Connection failure to HBase
> >> From: jdcryans@apache.org
> >> To: hbase-user@hadoop.apache.org
> >>
> >> If this is all of your hbase-site.xml, you're not using Hadoop at all.
> >> Please review the Pseudo-distributed documentation for HBase.
> >>
> >> J-D
> >>
> >> 2009/8/3 Onur AKTAS <on...@live.com>:
> >> >
> >> > I have changed hbase-site.xml as below, and it now works (in Local mode). Its something about Hadoop maybe?
> >> >
> >> > <configuration>
> >> >  <property>
> >> >    <name>hbase.master</name>
> >> >    <value>localhost:60000</value>
> >> >    <description>The directory shared by region servers.
> >> >    </description>
> >> >  </property>
> >> >  <property>
> >> >    <name>hbase.regionserver</name>
> >> >    <value>localhost:60020</value>
> >> >  </property>
> >> > </configuration>
> >> >
> >> >
> >> >> Date: Sun, 2 Aug 2009 19:25:16 -0700
> >> >> Subject: Re: Connection failure to HBase
> >> >> From: vpuranik@gmail.com
> >> >> To: hbase-user@hadoop.apache.org
> >> >>
> >> >> You can set hbase.master property on the configuration object:
> >> >>
> >> >> config.set("hbase.master", "localhost:9000");
> >> >>
> >> >> Regards,
> >> >> Vaibhav
> >> >>
> >> >> 2009/8/2 Onur AKTAS <on...@live.com>
> >> >>
> >> >> >
> >> >> > Hi,
> >> >> >
> >> >> > I have just installed Hadoop 19.3 (pseudo distributed mode) and Hbase 19.2
> >> >> > by following the instructions.
> >> >> > Both of them starts fine.
> >> >> >
> >> >> > Hadoop Log:
> >> >> > $ bin/start-all.sh
> >> >> > starting namenode, logging to
> >> >> > /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-namenode-localhost.localdomain.out
> >> >> > localhost: starting datanode, logging to
> >> >> > /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-datanode-localhost.localdomain.out
> >> >> > ....
> >> >> >
> >> >> > HBase Log:
> >> >> > $ bin/start-hbase.sh
> >> >> > starting master, logging to
> >> >> > /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-master-localhost.localdomain.out
> >> >> > localhost:
> >> >> > starting regionserver, logging to
> >> >> >
> >> >> > /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-regionserver-localhost.localdomain.out
> >> >> >
> >> >> > When I try to connect HBase from a client, it gives an error as:
> >> >> >
> >> >> > Aug 3, 2009 3:35:04 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection
> >> >> > handleConnectionFailure
> >> >> > INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried
> >> >> > 0 time
> >> >> >  (s).
> >> >> > Aug 3, 2009 3:35:05 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection
> >> >> > handleConnectionFailure
> >> >> > INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried
> >> >> > 1 time(s).
> >> >> >
> >> >> > I
> >> >> > have configured sites.xml etc as "localhost:9000", How can I change
> >> >> > that 60000 port in client? I use like below in my Java class.
> >> >> > HBaseConfiguration config = new HBaseConfiguration();
> >> >> >
> >> >> > Thanks.
> >> >> >

Re: Connection failure to HBase

Posted by Jean-Daniel Cryans <jd...@apache.org>.
If the client is not able to talk to the Master, it means that
something went wrong there that prevents it from starting. Look in the
master's log; you should see an exception.

J-D

2009/8/3 Onur AKTAS <on...@live.com>:
>
> No, this is what after I changed.
>
> I was using like below, but it was not working. It was giving an exception like "INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried"
> <property>
>    <name>hbase.rootdir</name>
>    <value>hdfs://localhost:9000/hbase</value>
>    <description>The directory shared by region servers.
>    </description>
> </property>
>
>
>> Date: Mon, 3 Aug 2009 08:36:09 -0400
>> Subject: Re: Connection failure to HBase
>> From: jdcryans@apache.org
>> To: hbase-user@hadoop.apache.org
>>
>> If this is all of your hbase-site.xml, you're not using Hadoop at all.
>> Please review the Pseudo-distributed documentation for HBase.
>>
>> J-D
>>
>> 2009/8/3 Onur AKTAS <on...@live.com>:
>> >
>> > I have changed hbase-site.xml as below, and it now works (in Local mode). Its something about Hadoop maybe?
>> >
>> > <configuration>
>> >  <property>
>> >    <name>hbase.master</name>
>> >    <value>localhost:60000</value>
>> >    <description>The directory shared by region servers.
>> >    </description>
>> >  </property>
>> >  <property>
>> >    <name>hbase.regionserver</name>
>> >    <value>localhost:60020</value>
>> >  </property>
>> > </configuration>
>> >
>> >
>> >> Date: Sun, 2 Aug 2009 19:25:16 -0700
>> >> Subject: Re: Connection failure to HBase
>> >> From: vpuranik@gmail.com
>> >> To: hbase-user@hadoop.apache.org
>> >>
>> >> You can set hbase.master property on the configuration object:
>> >>
>> >> config.set("hbase.master", "localhost:9000");
>> >>
>> >> Regards,
>> >> Vaibhav
>> >>
>> >> 2009/8/2 Onur AKTAS <on...@live.com>
>> >>
>> >> >
>> >> > Hi,
>> >> >
>> >> > I have just installed Hadoop 19.3 (pseudo distributed mode) and Hbase 19.2
>> >> > by following the instructions.
>> >> > Both of them starts fine.
>> >> >
>> >> > Hadoop Log:
>> >> > $ bin/start-all.sh
>> >> > starting namenode, logging to
>> >> > /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-namenode-localhost.localdomain.out
>> >> > localhost: starting datanode, logging to
>> >> > /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-datanode-localhost.localdomain.out
>> >> > ....
>> >> >
>> >> > HBase Log:
>> >> > $ bin/start-hbase.sh
>> >> > starting master, logging to
>> >> > /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-master-localhost.localdomain.out
>> >> > localhost:
>> >> > starting regionserver, logging to
>> >> >
>> >> > /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-regionserver-localhost.localdomain.out
>> >> >
>> >> > When I try to connect HBase from a client, it gives an error as:
>> >> >
>> >> > Aug 3, 2009 3:35:04 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection
>> >> > handleConnectionFailure
>> >> > INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried
>> >> > 0 time
>> >> >  (s).
>> >> > Aug 3, 2009 3:35:05 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection
>> >> > handleConnectionFailure
>> >> > INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried
>> >> > 1 time(s).
>> >> >
>> >> > I
>> >> > have configured sites.xml etc as "localhost:9000", How can I change
>> >> > that 60000 port in client? I use like below in my Java class.
>> >> > HBaseConfiguration config = new HBaseConfiguration();
>> >> >
>> >> > Thanks.
>> >> >

RE: Connection failure to HBase

Posted by Onur AKTAS <on...@live.com>.
No, this is how it looks after I changed it.

I was using the configuration below, but it was not working. It kept giving an exception like "INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried"
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
    <description>The directory shared by region servers.
    </description>
</property>
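If the hbase.rootdir approach is retried once HDFS is healthy, the value has to line up with fs.default.name on the Hadoop side. A sketch of the matching pair, assuming the localhost:9000 namenode from this thread:

```xml
<!-- hadoop-site.xml (Hadoop side): the namenode address -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- hbase-site.xml (HBase side): must point at the same namenode -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
  <description>The directory shared by region servers.</description>
</property>
```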


> Date: Mon, 3 Aug 2009 08:36:09 -0400
> Subject: Re: Connection failure to HBase
> From: jdcryans@apache.org
> To: hbase-user@hadoop.apache.org
> 
> If this is all of your hbase-site.xml, you're not using Hadoop at all.
> Please review the Pseudo-distributed documentation for HBase.
> 
> J-D
> 
> 2009/8/3 Onur AKTAS <on...@live.com>:
> >
> > I have changed hbase-site.xml as below, and it now works (in Local mode). Its something about Hadoop maybe?
> >
> > <configuration>
> >  <property>
> >    <name>hbase.master</name>
> >    <value>localhost:60000</value>
> >    <description>The directory shared by region servers.
> >    </description>
> >  </property>
> >  <property>
> >    <name>hbase.regionserver</name>
> >    <value>localhost:60020</value>
> >  </property>
> > </configuration>
> >
> >
> >> Date: Sun, 2 Aug 2009 19:25:16 -0700
> >> Subject: Re: Connection failure to HBase
> >> From: vpuranik@gmail.com
> >> To: hbase-user@hadoop.apache.org
> >>
> >> You can set hbase.master property on the configuration object:
> >>
> >> config.set("hbase.master", "localhost:9000");
> >>
> >> Regards,
> >> Vaibhav
> >>
> >> 2009/8/2 Onur AKTAS <on...@live.com>
> >>
> >> >
> >> > Hi,
> >> >
> >> > I have just installed Hadoop 19.3 (pseudo distributed mode) and Hbase 19.2
> >> > by following the instructions.
> >> > Both of them starts fine.
> >> >
> >> > Hadoop Log:
> >> > $ bin/start-all.sh
> >> > starting namenode, logging to
> >> > /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-namenode-localhost.localdomain.out
> >> > localhost: starting datanode, logging to
> >> > /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-datanode-localhost.localdomain.out
> >> > ....
> >> >
> >> > HBase Log:
> >> > $ bin/start-hbase.sh
> >> > starting master, logging to
> >> > /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-master-localhost.localdomain.out
> >> > localhost:
> >> > starting regionserver, logging to
> >> >
> >> > /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-regionserver-localhost.localdomain.out
> >> >
> >> > When I try to connect HBase from a client, it gives an error as:
> >> >
> >> > Aug 3, 2009 3:35:04 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection
> >> > handleConnectionFailure
> >> > INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried
> >> > 0 time
> >> >  (s).
> >> > Aug 3, 2009 3:35:05 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection
> >> > handleConnectionFailure
> >> > INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried
> >> > 1 time(s).
> >> >
> >> > I
> >> > have configured sites.xml etc as "localhost:9000", How can I change
> >> > that 60000 port in client? I use like below in my Java class.
> >> > HBaseConfiguration config = new HBaseConfiguration();
> >> >
> >> > Thanks.
> >> >

Re: Connection failure to HBase

Posted by Jean-Daniel Cryans <jd...@apache.org>.
If this is all of your hbase-site.xml, you're not using Hadoop at all.
Please review the Pseudo-distributed documentation for HBase.

J-D

2009/8/3 Onur AKTAS <on...@live.com>:
>
> I have changed hbase-site.xml as below, and it now works (in Local mode). Its something about Hadoop maybe?
>
> <configuration>
>  <property>
>    <name>hbase.master</name>
>    <value>localhost:60000</value>
>    <description>The directory shared by region servers.
>    </description>
>  </property>
>  <property>
>    <name>hbase.regionserver</name>
>    <value>localhost:60020</value>
>  </property>
> </configuration>
>
>
>> Date: Sun, 2 Aug 2009 19:25:16 -0700
>> Subject: Re: Connection failure to HBase
>> From: vpuranik@gmail.com
>> To: hbase-user@hadoop.apache.org
>>
>> You can set hbase.master property on the configuration object:
>>
>> config.set("hbase.master", "localhost:9000");
>>
>> Regards,
>> Vaibhav
>>
>> 2009/8/2 Onur AKTAS <on...@live.com>
>>
>> >
>> > Hi,
>> >
>> > I have just installed Hadoop 19.3 (pseudo distributed mode) and Hbase 19.2
>> > by following the instructions.
>> > Both of them starts fine.
>> >
>> > Hadoop Log:
>> > $ bin/start-all.sh
>> > starting namenode, logging to
>> > /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-namenode-localhost.localdomain.out
>> > localhost: starting datanode, logging to
>> > /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-datanode-localhost.localdomain.out
>> > ....
>> >
>> > HBase Log:
>> > $ bin/start-hbase.sh
>> > starting master, logging to
>> > /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-master-localhost.localdomain.out
>> > localhost:
>> > starting regionserver, logging to
>> >
>> > /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-regionserver-localhost.localdomain.out
>> >
>> > When I try to connect HBase from a client, it gives an error as:
>> >
>> > Aug 3, 2009 3:35:04 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection
>> > handleConnectionFailure
>> > INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried
>> > 0 time
>> >  (s).
>> > Aug 3, 2009 3:35:05 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection
>> > handleConnectionFailure
>> > INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried
>> > 1 time(s).
>> >
>> > I
>> > have configured sites.xml etc as "localhost:9000", How can I change
>> > that 60000 port in client? I use like below in my Java class.
>> > HBaseConfiguration config = new HBaseConfiguration();
>> >
>> > Thanks.
>> >

RE: Connection failure to HBase

Posted by Onur AKTAS <on...@live.com>.
I have changed hbase-site.xml as below, and it now works (in local mode). Is it something about Hadoop, maybe?

<configuration>
  <property>
    <name>hbase.master</name>
    <value>localhost:60000</value>
    <description>The directory shared by region servers.
    </description>
  </property>
  <property>
    <name>hbase.regionserver</name>
    <value>localhost:60020</value>
  </property>
</configuration>


> Date: Sun, 2 Aug 2009 19:25:16 -0700
> Subject: Re: Connection failure to HBase
> From: vpuranik@gmail.com
> To: hbase-user@hadoop.apache.org
> 
> You can set hbase.master property on the configuration object:
> 
> config.set("hbase.master", "localhost:9000");
> 
> Regards,
> Vaibhav
> 
> 2009/8/2 Onur AKTAS <on...@live.com>
> 
> >
> > Hi,
> >
> > I have just installed Hadoop 19.3 (pseudo distributed mode) and Hbase 19.2
> > by following the instructions.
> > Both of them starts fine.
> >
> > Hadoop Log:
> > $ bin/start-all.sh
> > starting namenode, logging to
> > /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-namenode-localhost.localdomain.out
> > localhost: starting datanode, logging to
> > /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-datanode-localhost.localdomain.out
> > ....
> >
> > HBase Log:
> > $ bin/start-hbase.sh
> > starting master, logging to
> > /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-master-localhost.localdomain.out
> > localhost:
> > starting regionserver, logging to
> >
> > /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-regionserver-localhost.localdomain.out
> >
> > When I try to connect HBase from a client, it gives an error as:
> >
> > Aug 3, 2009 3:35:04 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection
> > handleConnectionFailure
> > INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried
> > 0 time
> >  (s).
> > Aug 3, 2009 3:35:05 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection
> > handleConnectionFailure
> > INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried
> > 1 time(s).
> >
> > I
> > have configured sites.xml etc as "localhost:9000", How can I change
> > that 60000 port in client? I use like below in my Java class.
> > HBaseConfiguration config = new HBaseConfiguration();
> >
> > Thanks.
> >

Re: Connection failure to HBase

Posted by Vaibhav Puranik <vp...@gmail.com>.
You can set the hbase.master property on the configuration object:

config.set("hbase.master", "localhost:9000");
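The same override can also live in the hbase-site.xml on the client's classpath instead of being set in code. A sketch (note that hbase.master takes the master's RPC address; the default master port in HBase 0.19.x is 60000, whereas 9000 in this thread is the HDFS namenode port):

```xml
<property>
  <name>hbase.master</name>
  <!-- host:port of the HBase master's RPC endpoint;
       the 0.19.x default master port is 60000 -->
  <value>localhost:60000</value>
</property>
```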

Regards,
Vaibhav

2009/8/2 Onur AKTAS <on...@live.com>

>
> Hi,
>
> I have just installed Hadoop 19.3 (pseudo distributed mode) and Hbase 19.2
> by following the instructions.
> Both of them starts fine.
>
> Hadoop Log:
> $ bin/start-all.sh
> starting namenode, logging to
> /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-namenode-localhost.localdomain.out
> localhost: starting datanode, logging to
> /hda3/ps/hadoop-0.19.2/bin/../logs/hadoop-oracle-datanode-localhost.localdomain.out
> ....
>
> HBase Log:
> $ bin/start-hbase.sh
> starting master, logging to
> /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-master-localhost.localdomain.out
> localhost:
> starting regionserver, logging to
>
> /hda3/ps/hbase-0.19.3/bin/../logs/hbase-oracle-regionserver-localhost.localdomain.out
>
> When I try to connect HBase from a client, it gives an error as:
>
> Aug 3, 2009 3:35:04 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection
> handleConnectionFailure
> INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried
> 0 time
>  (s).
> Aug 3, 2009 3:35:05 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection
> handleConnectionFailure
> INFO: Retrying connect to server: localhost/127.0.0.1:60000. Already tried
> 1 time(s).
>
> I
> have configured sites.xml etc as "localhost:9000", How can I change
> that 60000 port in client? I use like below in my Java class.
> HBaseConfiguration config = new HBaseConfiguration();
>
> Thanks.
>