Posted to user@hbase.apache.org by Thibaut_ <tb...@blue.lu> on 2008/11/27 15:17:32 UTC

Problems running hbase on dfs

Hi,

I'm having some problems running HBase on a small cluster of 8 nodes running
Hadoop. It worked once already, but I must have changed something and it
doesn't work anymore... I've already spent hours trying to figure out what it
was, but I couldn't find the cause.

I'm using the HBase 0.18.1 release and tried it with the 0.18.0, 0.18.1 and
0.18.2 releases of Hadoop. My normal Java applications can access the
distributed file system without any problems.

Below is the exception I get when I try to start the HBase master node. (The
Hadoop namenode is running on server1, and I'm also starting the HBase master
node on server1.) I suspected some kind of jar version conflict, but I made
sure that I uploaded the original files of the distributions to the cluster,
keeping only the conf directory with my settings.

Using a temporary local directory (file:///...) instead of a shared directory
on the DFS brings the master up without any problems.


server1:/software/hbase-0.18.1# bin/hbase master start
08/11/27 14:51:22 ERROR master.HMaster: Can not start master
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:784)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:818)
Caused by: java.io.IOException: Call failed on local exception
        at org.apache.hadoop.ipc.Client.call(Client.java:718)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
        at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:319)
        at org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:103)
        at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:173)
        at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:67)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1339)
        at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1351)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:213)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:118)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:177)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:147)
        ... 6 more
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:375)
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:499)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:441)


Here is my hbase-site configuration:

  <property>
    <name>hbase.master</name>
    <value>server1:60000</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://server1:50070/hbase</value>
  </property>
  <property>
    <name>hbase.regionserver.dns.interface</name>
    <value>eth0</value>
  </property>


Would appreciate any help!
Thanks,
Thibaut

-- 
View this message in context: http://www.nabble.com/Problems-running-hbase-on-dfs-tp20720031p20720031.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Problems running hbase on dfs

Posted by Thibaut_ <tb...@blue.lu>.
Thanks,

There was no entry in the log. But since you mentioned a datanode, and I
wasn't running any datanode on that machine, I checked the port settings
again, and it turned out I was connecting to the wrong daemon! (Port 50010
instead of port 50070 with the default settings.) Silly me :(
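In other words, hbase.rootdir has to use the same host:port as fs.default.name in hadoop-site.xml (the NameNode's RPC address), not a datanode port. A sketch of the matching pair of settings — the port 9000 below is only an example, use whatever your fs.default.name actually says:

```xml
<!-- hadoop-site.xml: the NameNode RPC address (example port) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://server1:9000</value>
</property>

<!-- hbase-site.xml: hbase.rootdir must point at the same host:port -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://server1:9000/hbase</value>
</property>
```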

There should be a better error message though ;)

Thanks,
Thibaut



Andrew Purtell wrote:
> 
> Hello Thibaut,
> 
> What does the log in the corresponding data node show?
> 
> It does look like a jar version mismatch problem to me (EOF
> or socket timeout exceptions during RPC are a strong
> indicator of that), but as you say you have already checked
> into this. All the same perhaps you should verify again
> that things are running from the paths/directories you 
> expect and that every instance of the Hadoop and HBase jar
> files are exactly the same, using md5sum or similar. 
> 
> Thanks,
> 
>    - Andy
> 
> 
>> From: Thibaut_ <tb...@blue.lu>
>> Subject: Problems running hbase on dfs
>> To: hbase-user@hadoop.apache.org
>> Date: Thursday, November 27, 2008, 6:17 AM
>> [original message snipped -- see above]

-- 
View this message in context: http://www.nabble.com/Problems-running-hbase-on-dfs-tp20720031p20726135.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Problems running hbase on dfs

Posted by Andrew Purtell <ap...@yahoo.com>.
Hello Thibaut,

What does the log in the corresponding data node show?

It does look like a jar version mismatch problem to me (EOF
or socket timeout exceptions during RPC are a strong
indicator of that), but as you say you have already checked
into this. All the same perhaps you should verify again
that things are running from the paths/directories you 
expect and that every instance of the Hadoop and HBase jar
files are exactly the same, using md5sum or similar. 
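One way to script that check (a sketch; the jar path and the idea of first copying each node's jar to the local machine, e.g. with scp, are assumptions to adapt to your setup):

```shell
#!/bin/sh
# check_same FILE...: succeed only if every given file has the same md5 checksum.
# Gather each node's copy of the jar locally first, e.g.:
#   scp server2:/software/hbase-0.18.1/hbase-0.18.1.jar jar.server2
check_same() {
    # count the number of distinct checksums among the arguments
    distinct=$(md5sum "$@" | awk '{print $1}' | sort -u | wc -l)
    [ "$distinct" -eq 1 ]
}

# Example (hypothetical file names):
#   check_same hbase-0.18.1.jar jar.server2 jar.server3 && echo "all identical"
```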

Thanks,

   - Andy


> From: Thibaut_ <tb...@blue.lu>
> Subject: Problems running hbase on dfs
> To: hbase-user@hadoop.apache.org
> Date: Thursday, November 27, 2008, 6:17 AM
> [original message snipped -- see above]