Posted to user@hbase.apache.org by Ski Gh3 <sk...@gmail.com> on 2008/09/12 02:10:21 UTC

setting up hbase develop environment

Hi all,

I am a newcomer to HBase/Hadoop and I'm a little confused about setting up
the development environment.

I thought Hadoop came with HBase (in a contrib folder or so), but it's not in
the Hadoop version I downloaded (0.18.0). So should I download HBase
separately?
Then how can I put them into a single project and build them together? (Since I
am eventually interested in HBase, but may also want to make changes to
Hadoop if needed.)



Thanks!

Re: setting up hbase develop environment

Posted by Ski Gh3 <sk...@gmail.com>.
Thanks. I downloaded those and had no problem running them. I'm just wondering
what to do if I want to do some "customization" to one or both.

HBase needs Hadoop on the classpath in order to run, right? If I configure
these projects separately, what do I do to make my changes in Hadoop
visible to HBase? Rebuild the jar file and replace the old one?
This also raises another dumb question I was confused about: what's needed to
deploy HBase and Hadoop? I don't expect to download the release with the src
folder etc. to every machine... Is there a deployment guide somewhere?
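
A minimal sketch of that jar-swap workflow, assuming the Ant builds both
projects shipped with at the time and hypothetical tree locations
~/src/hadoop and ~/src/hbase (jar names assumed):

    # rebuild the hadoop core jar from the modified tree
    cd ~/src/hadoop
    ant jar
    # swap it into hbase's lib/ in place of the bundled hadoop jar
    rm ~/src/hbase/lib/hadoop-*-core.jar
    cp build/hadoop-*-core.jar ~/src/hbase/lib/
    # rebuild hbase against the new jar
    cd ~/src/hbase
    ant package

Changes then become visible to HBase because it compiles and runs against
whatever hadoop jar sits in its lib/ directory.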

Thanks a lot!

On Fri, Sep 12, 2008 at 1:31 AM, Krzysztof Szlapinski <
krzysztof.szlapinski@starline.hk> wrote:

> Answers inline:
>
>> Hi all,
>>
>>
>>
> hi
>
>> I am a newcomer to HBase/Hadoop and I'm a little confused about setting up
>> the development environment.
>>
>> I thought Hadoop came with HBase (in a contrib folder or so), but it's not in
>> the Hadoop version I downloaded (0.18.0). So should I download HBase
>> separately?
>>
>>
> no, it is not
> you should download
> hadoop 0.17.2.1
> and
> hbase 0.2.1 (http://people.apache.org/~stack/hbase-0.2.1-candidate-2/)
>
>> Then how can I put them into a single project and build them together? (Since I
>> am eventually interested in HBase, but may also want to make changes to
>> Hadoop if needed.)
>>
>>
> Hadoop and HBase are not just single projects; they come with a set of
> tools and so on.
> Since you have to configure HBase and Hadoop, it is a good approach to keep
> these projects in separate directories.
>
> krzysiek
>
>

Re: setting up hbase develop environment

Posted by Krzysztof Szlapinski <kr...@starline.hk>.
Answers inline:
> Hi all,
>
>   
hi
> I am a newcomer to HBase/Hadoop and I'm a little confused about setting up
> the development environment.
>
> I thought Hadoop came with HBase (in a contrib folder or so), but it's not in
> the Hadoop version I downloaded (0.18.0). So should I download HBase
> separately?
>   
no, it is not
you should download
hadoop 0.17.2.1
and
hbase 0.2.1 (http://people.apache.org/~stack/hbase-0.2.1-candidate-2/)
> Then how can I put them into a single project and build them together? (Since I
> am eventually interested in HBase, but may also want to make changes to
> Hadoop if needed.)
>   
Hadoop and HBase are not just single projects; they come with a set
of tools and so on.
Since you have to configure HBase and Hadoop, it is a good approach to keep
these projects in separate directories.
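
For example, a separate-directory layout after unpacking the two release
tarballs might look like this (file names assumed from the versions above):

    tar xzf hadoop-0.17.2.1.tar.gz   # -> ./hadoop-0.17.2.1/
    tar xzf hbase-0.2.1.tar.gz       # -> ./hbase-0.2.1/
    # each tree keeps its own conf/ directory:
    #   hadoop-0.17.2.1/conf/hadoop-site.xml
    #   hbase-0.2.1/conf/hbase-site.xml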

krzysiek


Re: setting up hbase develop environment

Posted by Ski Gh3 <sk...@gmail.com>.
No, I just plan to make some "customization" that I don't think would be
generically useful...
Can you give me some help on my previous post? I'm still confused about the
setup for developing Hadoop and HBase together, as well as for deployment.
Thanks!

On Fri, Sep 12, 2008 at 9:15 AM, Jim Kellerman <ji...@powerset.com> wrote:

> If you are planning to make changes, then you should check out the source
> trees for hbase and hadoop:
>
> http://svn.apache.org/repos/asf/hadoop/hbase/trunk (for hbase)
> http://svn.apache.org/repos/asf/hadoop/core/trunk (for hadoop)
>
> ---
> Jim Kellerman, Senior Software Development Engineer
> Powerset (Live Search, Microsoft Corporation)
>
>
> > -----Original Message-----
> > From: Ski Gh3 [mailto:skigh3@gmail.com]
> > Sent: Thursday, September 11, 2008 5:10 PM
> > To: hbase-user@hadoop.apache.org
> > Subject: setting up hbase develop environment
> >
> > Hi all,
> >
> > I am a newcomer to HBase/Hadoop and I'm a little confused about setting up
> > the development environment.
> >
> > I thought Hadoop came with HBase (in a contrib folder or so), but it's not in
> > the Hadoop version I downloaded (0.18.0). So should I download HBase
> > separately?
> > Then how can I put them into a single project and build them together? (Since I
> > am eventually interested in HBase, but may also want to make changes to
> > Hadoop if needed.)
> >
> >
> >
> > Thanks!
>

Re: .META error when I try to insert after truncating table

Posted by Ryan LeCompte <le...@gmail.com>.
Hey Preston,

Did you resolve this? I'm seeing the exact same error using HBase
0.2.0 and Hadoop 0.18.0.

Thanks!

Ryan


On Fri, Sep 12, 2008 at 2:25 PM, Preston Price <pr...@strands.com> wrote:
> I am using the hbase-default.xml that came with the hbase-0.2.0 download.
> The only config files I replaced are the hbase-env.sh, hbase-site.xml and
> regionservers files.
>
> I will take a stab at getting the RC up.
>
> Thanks
>
> -Preston
> On Sep 12, 2008, at 12:13 PM, Jean-Daniel Cryans wrote:
>
>> Preston,
>>
>> Have you copied the hbase-default from the new distribution? It is needed.
>> You should also jump right to 0.2.1RC2 (see the thread on the mailing list
>> for the link to the release).
>>
>> J-D
>>
>> On Fri, Sep 12, 2008 at 2:09 PM, Preston Price <pr...@strands.com> wrote:
>>
>>> I took Jean-Daniel Cryans' advice and am now trying to get Hadoop 0.18.0
>>> and HBase 0.2.0 up and running.
>>> I copied the configuration from the previous versions of HBase and Hadoop
>>> we had running, and with a slight modification I got hadoop going.
>>> I still can't get HBase 0.2.0 going.
>>> Here is the output from the master log:
>>>
>>> Fri Sep 12 12:01:23 MDT 2008 Starting master on atlas
>>> java version "1.5.0_15"
>>> Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_15-b04)
>>> Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_15-b04, mixed mode)
>>> ulimit -n 1024
>>> 2008-09-12 12:02:24,157 ERROR org.apache.hadoop.hbase.master.HMaster: Can
>>> not start master
>>> java.lang.reflect.InvocationTargetException
>>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)
>>>      at
>>>
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>      at
>>>
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>      at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
>>>      at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:798)
>>>      at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:832)
>>> Caused by: java.net.SocketTimeoutException: timed out waiting for rpc
>>> response
>>>      at org.apache.hadoop.ipc.Client.call(Client.java:559)
>>>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
>>>      at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>>>      at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:313)
>>>      at
>>> org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:102)
>>>      at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:178)
>>>      at
>>>
>>> org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:68)
>>>      at
>>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1280)
>>>      at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
>>>      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1291)
>>>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:203)
>>>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
>>>      at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:178)
>>>      at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:148)
>>>      ... 6 more
>>>
>>> It looks like it can't connect to the Hadoop DFS I have running, but I've
>>> confirmed that Hadoop is running by manipulating files on the DFS.
>>>
>>> Here is the hbase-site.xml I am using:
>>> <configuration>
>>>
>>> <property>
>>>  <name>hbase.master</name>
>>>  <value>atlas:60000</value>
>>>  <description>The host and port that the HBase master runs at.
>>>  </description>
>>> </property>
>>>
>>> <property>
>>>  <name>hbase.rootdir</name>
>>>  <value>hdfs://atlas:54310/hbase</value>
>>>  <description>The directory shared by region servers.
>>>  </description>
>>> </property>
>>>
>>> </configuration>
>>>
>>> Any ideas?
>>>
>>> Thanks!
>>>
>>> -Preston
>>>
>>>
>>> On Sep 12, 2008, at 11:10 AM, Jean-Daniel Cryans wrote:
>>>
>>> Preston,
>>>>
>>>> You should definitely upgrade to HBase 0.2.
>>>>
>>>> J-D
>>>>
>>>> On Fri, Sep 12, 2008 at 1:06 PM, Preston Price <pr...@strands.com>
>>>> wrote:
>>>>
>>>> I see this error every once in a while in our client logs:
>>>>>
>>>>> java.io.IOException: HRegionInfo was null or empty in .META.
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:429)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:350)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.hadoop.hbase.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:318)
>>>>>    at org.apache.hadoop.hbase.HTable.getRegionLocation(HTable.java:114)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.hadoop.hbase.HTable$ServerCallable.instantiateServer(HTable.java:1009)
>>>>>    at
>>>>>
>>>>>
>>>>> org.apache.hadoop.hbase.HTable.getRegionServerWithRetries(HTable.java:1024)
>>>>>    at org.apache.hadoop.hbase.HTable.commit(HTable.java:763)
>>>>>    at org.apache.hadoop.hbase.HTable.commit(HTable.java:744)
>>>>>
>>>>> I usually only see it after truncating our table like this:
>>>>> disable tableName;
>>>>> truncate table tableName;
>>>>> enable tableName;
>>>>>
>>>>> In our process that does the inserts we see it hang for a while on the
>>>>> first insert until it gets this error, and then starts inserting
>>>>> records
>>>>> with no problem.
>>>>>
>>>>> Is this something I should be concerned with?
>>>>> I am not familiar enough with what goes on 'under the hood' to know
>>>>> what
>>>>> this error is trying to tell me.
>>>>>
>>>>> Hadoop version: 0.16.2
>>>>> HBase version: 0.1.3
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>> -Preston
>>>>>
>>>>>
>>>
>
>

RE: .META error when I try to insert after truncating table

Posted by Jonathan Gray <jl...@streamy.com>.
While not officially supported, HBase 0.2.x runs fine on Hadoop 0.18.0.

To make it work, you need to recompile HBase with the Hadoop 0.18 jars in
$HBASE_HOME/lib/ and remove all the 0.17 jars.

Then just ensure that your classpaths all point to the 0.18 jars and not
the 0.17 ones.

I'm running live on 0.18.0 with 0.2.1 RC2 right now and there are no issues.
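
A minimal sketch of that recompile, assuming an Ant build and hypothetical
paths for the unpacked releases:

    cd hbase-0.2.1
    rm lib/hadoop-0.17*.jar                         # drop the bundled 0.17 jars
    cp ~/hadoop-0.18.0/hadoop-0.18.0-core.jar lib/  # add the 0.18 core jar
    ant package                                     # rebuild hbase against 0.18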

-----Original Message-----
From: Krzysztof Szlapinski [mailto:krzysztof.szlapinski@starline.hk] 
Sent: Friday, September 12, 2008 2:03 PM
To: hbase-user@hadoop.apache.org
Subject: Re: .META error when I try to insert after truncating table

Preston Price pisze:
> I still cannot get HBase 0.2.0 or 0.2.1 to play nicely with Hadoop 0.18.0.
> I did notice this line under Requirements in the HBase 0.2.0 docs:
> "Hadoop 0.17.x. This version of HBase will only run on this version of
> Hadoop."
>
> Using Hadoop 0.17.2.1 I was able to get both 0.2.0 and 0.2.1 up and 
> running.
>
> So I am assuming that Hadoop 0.18.0 is unsupported for the time being?
>
As far as I know, Hadoop 0.18.0 is not supported by HBase 0.2.x.
There are plans to change the versioning scheme to keep it consistent
with the Hadoop versions, so I guess the next HBase release will support
Hadoop 0.18.0 and will have the same version number (0.18).


> Thanks
>
> -Preston
>
> On Sep 12, 2008, at 12:25 PM, Preston Price wrote:
>
>> I am using the hbase-default.xml that came with the hbase-0.2.0 
>> download.
>> The only config files I replaced are the hbase-env.sh, hbase-site.xml 
>> and regionservers files.
>>
>> I will take a stab at getting the RC up.
>>
>> Thanks
>>
>> -Preston
>> On Sep 12, 2008, at 12:13 PM, Jean-Daniel Cryans wrote:
>>
>>> Preston,
>>>
>>> Have you copied the hbase-default from the new distribution? It is 
>>> needed.
>>> You should also jump right to 0.2.1RC2 (see the thread on the 
>>> mailing list
>>> for the link to the release).
>>>
>>> J-D
>>>
>>> On Fri, Sep 12, 2008 at 2:09 PM, Preston Price <pr...@strands.com> 
>>> wrote:
>>>
>>>> I took Jean-Daniel Cryans' advice and am now trying to get Hadoop 
>>>> 0.18.0
>>>> and HBase 0.2.0 up and running.
>>>> I copied the configuration from the previous versions of HBase and 
>>>> Hadoop
>>>> we had running, and with a slight modification I got hadoop going.
>>>> I still can't get HBase 0.2.0 going.
>>>> Here is the output from the master log:
>>>>
>>>> Fri Sep 12 12:01:23 MDT 2008 Starting master on atlas
>>>> java version "1.5.0_15"
>>>> Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_15-b04)
>>>> Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_15-b04, mixed mode)
>>>> ulimit -n 1024
>>>> 2008-09-12 12:02:24,157 ERROR 
>>>> org.apache.hadoop.hbase.master.HMaster: Can
>>>> not start master
>>>> java.lang.reflect.InvocationTargetException
>>>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>>> Method)
>>>>      at
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>>
>>>>      at
>>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>>
>>>>      at 
>>>> java.lang.reflect.Constructor.newInstance(Constructor.java:494)
>>>>      at 
>>>> org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:798)
>>>>      at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:832)
>>>> Caused by: java.net.SocketTimeoutException: timed out waiting for rpc
>>>> response
>>>>      at org.apache.hadoop.ipc.Client.call(Client.java:559)
>>>>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
>>>>      at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown 
>>>> Source)
>>>>      at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:313)
>>>>      at
>>>> org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:102)
>>>>      at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:178)
>>>>      at
>>>> org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:68)
>>>>
>>>>      at
>>>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1280)
>>>>      at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
>>>>      at 
>>>> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1291)
>>>>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:203)
>>>>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
>>>>      at 
>>>> org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:178)
>>>>      at 
>>>> org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:148)
>>>>      ... 6 more
>>>>
>>>> It looks like it can't connect to the Hadoop DFS I have running, 
>>>> but I've
>>>> confirmed that Hadoop is running by manipulating files on the DFS.
>>>>
>>>> Here is the hbase-site.xml I am using:
>>>> <configuration>
>>>>
>>>> <property>
>>>>  <name>hbase.master</name>
>>>>  <value>atlas:60000</value>
>>>>  <description>The host and port that the HBase master runs at.
>>>>  </description>
>>>> </property>
>>>>
>>>> <property>
>>>>  <name>hbase.rootdir</name>
>>>>  <value>hdfs://atlas:54310/hbase</value>
>>>>  <description>The directory shared by region servers.
>>>>  </description>
>>>> </property>
>>>>
>>>> </configuration>
>>>>
>>>> Any ideas?
>>>>
>>>> Thanks!
>>>>
>>>> -Preston
>>>>
>>>>
>>>> On Sep 12, 2008, at 11:10 AM, Jean-Daniel Cryans wrote:
>>>>
>>>> Preston,
>>>>>
>>>>> You should definitely upgrade to HBase 0.2.
>>>>>
>>>>> J-D
>>>>>
>>>>> On Fri, Sep 12, 2008 at 1:06 PM, Preston Price <pr...@strands.com> 
>>>>> wrote:
>>>>>
>>>>> I see this error every once in a while in our client logs:
>>>>>>
>>>>>> java.io.IOException: HRegionInfo was null or empty in .META.
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:429)
>>>>>>
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:350)
>>>>>>
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:318)
>>>>>>
>>>>>>    at org.apache.hadoop.hbase.HTable.getRegionLocation(HTable.java:114)
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.HTable$ServerCallable.instantiateServer(HTable.java:1009)
>>>>>>
>>>>>>    at
>>>>>> org.apache.hadoop.hbase.HTable.getRegionServerWithRetries(HTable.java:1024)
>>>>>>
>>>>>>    at org.apache.hadoop.hbase.HTable.commit(HTable.java:763)
>>>>>>    at org.apache.hadoop.hbase.HTable.commit(HTable.java:744)
>>>>>>
>>>>>> I usually only see it after truncating our table like this:
>>>>>> disable tableName;
>>>>>> truncate table tableName;
>>>>>> enable tableName;
>>>>>>
>>>>>> In our process that does the inserts we see it hang for a while 
>>>>>> on the
>>>>>> first insert until it gets this error, and then starts inserting 
>>>>>> records
>>>>>> with no problem.
>>>>>>
>>>>>> Is this something I should be concerned with?
>>>>>> I am not familiar enough with what goes on 'under the hood' to 
>>>>>> know what
>>>>>> this error is trying to tell me.
>>>>>>
>>>>>> Hadoop version: 0.16.2
>>>>>> HBase version: 0.1.3
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> -Preston
>>>>>>
>>>>>>
>>>>
>>
>
>


Re: .META error when I try to insert after truncating table

Posted by Krzysztof Szlapinski <kr...@starline.hk>.
Preston Price pisze:
> I still cannot get HBase 0.2.0 or 0.2.1 to play nicely with Hadoop 0.18.0.
> I did notice this line under Requirements in the HBase 0.2.0 docs:
> "Hadoop 0.17.x. This version of HBase will only run on this version of
> Hadoop."
>
> Using Hadoop 0.17.2.1 I was able to get both 0.2.0 and 0.2.1 up and 
> running.
>
> So I am assuming that Hadoop 0.18.0 is unsupported for the time being?
>
As far as I know, Hadoop 0.18.0 is not supported by HBase 0.2.x.
There are plans to change the versioning scheme to keep it consistent
with the Hadoop versions, so I guess the next HBase release will support
Hadoop 0.18.0 and will have the same version number (0.18).


> Thanks
>
> -Preston
>
> On Sep 12, 2008, at 12:25 PM, Preston Price wrote:
>
>> I am using the hbase-default.xml that came with the hbase-0.2.0 
>> download.
>> The only config files I replaced are the hbase-env.sh, hbase-site.xml 
>> and regionservers files.
>>
>> I will take a stab at getting the RC up.
>>
>> Thanks
>>
>> -Preston
>> On Sep 12, 2008, at 12:13 PM, Jean-Daniel Cryans wrote:
>>
>>> Preston,
>>>
>>> Have you copied the hbase-default from the new distribution? It is 
>>> needed.
>>> You should also jump right to 0.2.1RC2 (see the thread on the 
>>> mailing list
>>> for the link to the release).
>>>
>>> J-D
>>>
>>> On Fri, Sep 12, 2008 at 2:09 PM, Preston Price <pr...@strands.com> 
>>> wrote:
>>>
>>>> I took Jean-Daniel Cryans' advice and am now trying to get Hadoop 
>>>> 0.18.0
>>>> and HBase 0.2.0 up and running.
>>>> I copied the configuration from the previous versions of HBase and 
>>>> Hadoop
>>>> we had running, and with a slight modification I got hadoop going.
>>>> I still can't get HBase 0.2.0 going.
>>>> Here is the output from the master log:
>>>>
>>>> Fri Sep 12 12:01:23 MDT 2008 Starting master on atlas
>>>> java version "1.5.0_15"
>>>> Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_15-b04)
>>>> Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_15-b04, mixed mode)
>>>> ulimit -n 1024
>>>> 2008-09-12 12:02:24,157 ERROR 
>>>> org.apache.hadoop.hbase.master.HMaster: Can
>>>> not start master
>>>> java.lang.reflect.InvocationTargetException
>>>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>>> Method)
>>>>      at
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) 
>>>>
>>>>      at
>>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) 
>>>>
>>>>      at 
>>>> java.lang.reflect.Constructor.newInstance(Constructor.java:494)
>>>>      at 
>>>> org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:798)
>>>>      at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:832)
>>>> Caused by: java.net.SocketTimeoutException: timed out waiting for rpc
>>>> response
>>>>      at org.apache.hadoop.ipc.Client.call(Client.java:559)
>>>>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
>>>>      at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown 
>>>> Source)
>>>>      at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:313)
>>>>      at
>>>> org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:102)
>>>>      at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:178)
>>>>      at
>>>> org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:68) 
>>>>
>>>>      at
>>>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1280)
>>>>      at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
>>>>      at 
>>>> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1291)
>>>>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:203)
>>>>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
>>>>      at 
>>>> org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:178)
>>>>      at 
>>>> org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:148)
>>>>      ... 6 more
>>>>
>>>> It looks like it can't connect to the Hadoop DFS I have running, 
>>>> but I've
>>>> confirmed that Hadoop is running by manipulating files on the DFS.
>>>>
>>>> Here is the hbase-site.xml I am using:
>>>> <configuration>
>>>>
>>>> <property>
>>>>  <name>hbase.master</name>
>>>>  <value>atlas:60000</value>
>>>>  <description>The host and port that the HBase master runs at.
>>>>  </description>
>>>> </property>
>>>>
>>>> <property>
>>>>  <name>hbase.rootdir</name>
>>>>  <value>hdfs://atlas:54310/hbase</value>
>>>>  <description>The directory shared by region servers.
>>>>  </description>
>>>> </property>
>>>>
>>>> </configuration>
>>>>
>>>> Any ideas?
>>>>
>>>> Thanks!
>>>>
>>>> -Preston
>>>>
>>>>
>>>> On Sep 12, 2008, at 11:10 AM, Jean-Daniel Cryans wrote:
>>>>
>>>> Preston,
>>>>>
>>>>> You should definitely upgrade to HBase 0.2.
>>>>>
>>>>> J-D
>>>>>
>>>>> On Fri, Sep 12, 2008 at 1:06 PM, Preston Price <pr...@strands.com> 
>>>>> wrote:
>>>>>
>>>>> I see this error every once in a while in our client logs:
>>>>>>
>>>>>> java.io.IOException: HRegionInfo was null or empty in .META.
>>>>>>    at
>>>>>>
>>>>>> org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:429) 
>>>>>>
>>>>>>    at
>>>>>>
>>>>>> org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:350) 
>>>>>>
>>>>>>    at
>>>>>>
>>>>>> org.apache.hadoop.hbase.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:318) 
>>>>>>
>>>>>>    at 
>>>>>> org.apache.hadoop.hbase.HTable.getRegionLocation(HTable.java:114)
>>>>>>    at
>>>>>>
>>>>>> org.apache.hadoop.hbase.HTable$ServerCallable.instantiateServer(HTable.java:1009) 
>>>>>>
>>>>>>    at
>>>>>>
>>>>>> org.apache.hadoop.hbase.HTable.getRegionServerWithRetries(HTable.java:1024) 
>>>>>>
>>>>>>    at org.apache.hadoop.hbase.HTable.commit(HTable.java:763)
>>>>>>    at org.apache.hadoop.hbase.HTable.commit(HTable.java:744)
>>>>>>
>>>>>> I usually only see it after truncating our table like this:
>>>>>> disable tableName;
>>>>>> truncate table tableName;
>>>>>> enable tableName;
>>>>>>
>>>>>> In our process that does the inserts we see it hang for a while 
>>>>>> on the
>>>>>> first insert until it gets this error, and then starts inserting 
>>>>>> records
>>>>>> with no problem.
>>>>>>
>>>>>> Is this something I should be concerned with?
>>>>>> I am not familiar enough with what goes on 'under the hood' to 
>>>>>> know what
>>>>>> this error is trying to tell me.
>>>>>>
>>>>>> Hadoop version: 0.16.2
>>>>>> HBase version: 0.1.3
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> -Preston
>>>>>>
>>>>>>
>>>>
>>
>
>


Re: .META error when I try to insert after truncating table

Posted by Preston Price <pr...@strands.com>.
I still cannot get HBase 0.2.0 or 0.2.1 to play nicely with Hadoop 0.18.0.
I did notice this line under Requirements in the HBase 0.2.0 docs:
"Hadoop 0.17.x. This version of HBase will only run on this version of
Hadoop."

Using Hadoop 0.17.2.1 I was able to get both 0.2.0 and 0.2.1 up and  
running.

So I am assuming that Hadoop 0.18.0 is unsupported for the time being?

Thanks

-Preston

On Sep 12, 2008, at 12:25 PM, Preston Price wrote:

> I am using the hbase-default.xml that came with the hbase-0.2.0  
> download.
> The only config files I replaced are the hbase-env.sh, hbase-site.xml
> and regionservers files.
>
> I will take a stab at getting the RC up.
>
> Thanks
>
> -Preston
> On Sep 12, 2008, at 12:13 PM, Jean-Daniel Cryans wrote:
>
>> Preston,
>>
>> Have you copied the hbase-default from the new distribution? It is  
>> needed.
>> You should also jump right to 0.2.1RC2 (see the thread on the  
>> mailing list
>> for the link to the release).
>>
>> J-D
>>
>> On Fri, Sep 12, 2008 at 2:09 PM, Preston Price <pr...@strands.com>  
>> wrote:
>>
>>> I took Jean-Daniel Cryans' advice and am now trying to get Hadoop  
>>> 0.18.0
>>> and HBase 0.2.0 up and running.
>>> I copied the configuration from the previous versions of HBase and  
>>> Hadoop
>>> we had running, and with a slight modification I got hadoop going.
>>> I still can't get HBase 0.2.0 going.
>>> Here is the output from the master log:
>>>
>>> Fri Sep 12 12:01:23 MDT 2008 Starting master on atlas
>>> java version "1.5.0_15"
>>> Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_15-b04)
>>> Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_15-b04, mixed mode)
>>> ulimit -n 1024
>>> 2008-09-12 12:02:24,157 ERROR  
>>> org.apache.hadoop.hbase.master.HMaster: Can
>>> not start master
>>> java.lang.reflect.InvocationTargetException
>>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>      at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
>>>      at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:798)
>>>      at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:832)
>>> Caused by: java.net.SocketTimeoutException: timed out waiting for rpc response
>>>      at org.apache.hadoop.ipc.Client.call(Client.java:559)
>>>      at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
>>>      at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>>>      at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:313)
>>>      at org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:102)
>>>      at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:178)
>>>      at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:68)
>>>      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1280)
>>>      at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
>>>      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1291)
>>>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:203)
>>>      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
>>>      at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:178)
>>>      at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:148)
>>>      ... 6 more
>>>
>>> It looks like it can't connect to the Hadoop DFS I have running,  
>>> but I've
>>> confirmed that Hadoop is running by manipulating files on the DFS.
>>>
>>> Here is the hbase-site.xml I am using:
>>> <configuration>
>>>
>>> <property>
>>>  <name>hbase.master</name>
>>>  <value>atlas:60000</value>
>>>  <description>The host and port that the HBase master runs at.
>>>  </description>
>>> </property>
>>>
>>> <property>
>>>  <name>hbase.rootdir</name>
>>>  <value>hdfs://atlas:54310/hbase</value>
>>>  <description>The directory shared by region servers.
>>>  </description>
>>> </property>
>>>
>>> </configuration>
>>>
>>> Any ideas?
>>>
>>> Thanks!
>>>
>>> -Preston
>>>
>>>
>>> On Sep 12, 2008, at 11:10 AM, Jean-Daniel Cryans wrote:
>>>
>>> Preston,
>>>>
>>>> You should definitely upgrade to HBase 0.2.
>>>>
>>>> J-D
>>>>
>>>> On Fri, Sep 12, 2008 at 1:06 PM, Preston Price  
>>>> <pr...@strands.com> wrote:
>>>>
>>>> I see this error every once in a while in our client logs:
>>>>>
>>>>> java.io.IOException: HRegionInfo was null or empty in .META.
>>>>>    at org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:429)
>>>>>    at org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:350)
>>>>>    at org.apache.hadoop.hbase.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:318)
>>>>>    at org.apache.hadoop.hbase.HTable.getRegionLocation(HTable.java:114)
>>>>>    at org.apache.hadoop.hbase.HTable$ServerCallable.instantiateServer(HTable.java:1009)
>>>>>    at org.apache.hadoop.hbase.HTable.getRegionServerWithRetries(HTable.java:1024)
>>>>>    at org.apache.hadoop.hbase.HTable.commit(HTable.java:763)
>>>>>    at org.apache.hadoop.hbase.HTable.commit(HTable.java:744)
>>>>>
>>>>> I usually only see it after truncating our table like this:
>>>>> disable tableName;
>>>>> truncate table tableName;
>>>>> enable tableName;
>>>>>
>>>>> In our process that does the inserts we see it hang for a while  
>>>>> on the
>>>>> first insert until it gets this error, and then starts inserting  
>>>>> records
>>>>> with no problem.
>>>>>
>>>>> Is this something I should be concerned with?
>>>>> I am not familiar enough with what goes on 'under the hood' to  
>>>>> know what
>>>>> this error is trying to tell me.
>>>>>
>>>>> Hadoop version: 0.16.2
>>>>> HBase version: 0.1.3
>>>>>
>>>>>
>>>>> Thanks
>>>>>
>>>>> -Preston
>>>>>
>>>>>
>>>
>


Re: .META error when I try to insert after truncating table

Posted by Preston Price <pr...@strands.com>.
I am using the hbase-default.xml that came with the hbase-0.2.0  
download.
The only config files I replaced are the hbase-env.sh, hbase-site.xml  
and regionservers files.

I will take a stab at getting the RC up.

Thanks

-Preston
On Sep 12, 2008, at 12:13 PM, Jean-Daniel Cryans wrote:

> Preston,
>
> Have you copied the hbase-default from the new distribution? It is  
> needed.
> You should also jump right to 0.2.1RC2 (see the thread on the  
> mailing list
> for the link to the release).
>
> J-D
>
> On Fri, Sep 12, 2008 at 2:09 PM, Preston Price <pr...@strands.com>  
> wrote:
>
>> I took Jean-Daniel Cryans' advice and am now trying to get Hadoop  
>> 0.18.0
>> and HBase 0.2.0 up and running.
>> I copied the configuration from the previous versions of HBase and  
>> Hadoop
>> we had running, and with a slight modification I got hadoop going.
>> I still can't get HBase 0.2.0 going.
>> Here is the output from the master log:
>>
>> Fri Sep 12 12:01:23 MDT 2008 Starting master on atlas
>> java version "1.5.0_15"
>> Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_15-b04)
>> Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_15-b04, mixed mode)
>> ulimit -n 1024
>> 2008-09-12 12:02:24,157 ERROR  
>> org.apache.hadoop.hbase.master.HMaster: Can
>> not start master
>> java.lang.reflect.InvocationTargetException
>>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>       at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
>>       at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:798)
>>       at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:832)
>> Caused by: java.net.SocketTimeoutException: timed out waiting for rpc response
>>       at org.apache.hadoop.ipc.Client.call(Client.java:559)
>>       at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
>>       at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>>       at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:313)
>>       at org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:102)
>>       at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:178)
>>       at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:68)
>>       at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1280)
>>       at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
>>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1291)
>>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:203)
>>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
>>       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:178)
>>       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:148)
>>       ... 6 more
>>
>> It looks like it can't connect to the Hadoop DFS I have running,  
>> but I've
>> confirmed that Hadoop is running by manipulating files on the DFS.
>>
>> Here is the hbase-site.xml I am using:
>> <configuration>
>>
>> <property>
>>   <name>hbase.master</name>
>>   <value>atlas:60000</value>
>>   <description>The host and port that the HBase master runs at.
>>   </description>
>> </property>
>>
>> <property>
>>   <name>hbase.rootdir</name>
>>   <value>hdfs://atlas:54310/hbase</value>
>>   <description>The directory shared by region servers.
>>   </description>
>> </property>
>>
>> </configuration>
>>
>> Any ideas?
>>
>> Thanks!
>>
>> -Preston
>>
>>
>> On Sep 12, 2008, at 11:10 AM, Jean-Daniel Cryans wrote:
>>
>> Preston,
>>>
>>> You should definitely upgrade to HBase 0.2.
>>>
>>> J-D
>>>
>>> On Fri, Sep 12, 2008 at 1:06 PM, Preston Price <pr...@strands.com>  
>>> wrote:
>>>
>>> I see this error every once in a while in our client logs:
>>>>
>>>> java.io.IOException: HRegionInfo was null or empty in .META.
>>>>     at org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:429)
>>>>     at org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:350)
>>>>     at org.apache.hadoop.hbase.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:318)
>>>>     at org.apache.hadoop.hbase.HTable.getRegionLocation(HTable.java:114)
>>>>     at org.apache.hadoop.hbase.HTable$ServerCallable.instantiateServer(HTable.java:1009)
>>>>     at org.apache.hadoop.hbase.HTable.getRegionServerWithRetries(HTable.java:1024)
>>>>     at org.apache.hadoop.hbase.HTable.commit(HTable.java:763)
>>>>     at org.apache.hadoop.hbase.HTable.commit(HTable.java:744)
>>>>
>>>> I usually only see it after truncating our table like this:
>>>> disable tableName;
>>>> truncate table tableName;
>>>> enable tableName;
>>>>
>>>> In our process that does the inserts we see it hang for a while  
>>>> on the
>>>> first insert until it gets this error, and then starts inserting  
>>>> records
>>>> with no problem.
>>>>
>>>> Is this something I should be concerned with?
>>>> I am not familiar enough with what goes on 'under the hood' to  
>>>> know what
>>>> this error is trying to tell me.
>>>>
>>>> Hadoop version: 0.16.2
>>>> HBase version: 0.1.3
>>>>
>>>>
>>>> Thanks
>>>>
>>>> -Preston
>>>>
>>>>
>>


Re: .META error when I try to insert after truncating table

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Preston,

Have you copied hbase-default.xml from the new distribution? It is needed.
You should also jump right to 0.2.1 RC2 (see the thread on the mailing list
for the link to the release).
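
A minimal sketch of that config refresh, assuming the 0.2.1 release is
unpacked next to the running install (paths hypothetical):

    cp hbase-0.2.1/conf/hbase-default.xml $HBASE_HOME/conf/
    # keep site-specific overrides in hbase-site.xml;
    # hbase-default.xml must match the release being run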

J-D

On Fri, Sep 12, 2008 at 2:09 PM, Preston Price <pr...@strands.com> wrote:

> I took Jean-Daniel Cryans' advice and am now trying to get Hadoop 0.18.0
> and HBase 0.2.0 up and running.
> I copied the configuration from the previous versions of HBase and Hadoop
> we had running, and with a slight modification I got hadoop going.
> I still can't get HBase 0.2.0 going.
> Here is the output from the master log:
>
> Fri Sep 12 12:01:23 MDT 2008 Starting master on atlas
> java version "1.5.0_15"
> Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_15-b04)
> Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_15-b04, mixed mode)
> ulimit -n 1024
> 2008-09-12 12:02:24,157 ERROR org.apache.hadoop.hbase.master.HMaster: Can
> not start master
> java.lang.reflect.InvocationTargetException
>        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>        at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>        at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>        at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
>        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:798)
>        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:832)
> Caused by: java.net.SocketTimeoutException: timed out waiting for rpc
> response
>        at org.apache.hadoop.ipc.Client.call(Client.java:559)
>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
>        at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:313)
>        at
> org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:102)
>        at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:178)
>        at
> org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:68)
>        at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1280)
>        at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1291)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:203)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
>        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:178)
>        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:148)
>        ... 6 more
>
> It looks like it can't connect to the Hadoop DFS I have running, but I've
> confirmed that Hadoop is running by manipulating files on the DFS.
>
> Here is the hbase-site.xml I am using:
> <configuration>
>
>  <property>
>    <name>hbase.master</name>
>    <value>atlas:60000</value>
>    <description>The host and port that the HBase master runs at.
>    </description>
>  </property>
>
>  <property>
>    <name>hbase.rootdir</name>
>    <value>hdfs://atlas:54310/hbase</value>
>    <description>The directory shared by region servers.
>    </description>
>  </property>
>
> </configuration>
>
> Any ideas?
>
> Thanks!
>
> -Preston
>
>
> On Sep 12, 2008, at 11:10 AM, Jean-Daniel Cryans wrote:
>
>  Preston,
>>
>> You should definitely upgrade to HBase 0.2.
>>
>> J-D
>>
>> On Fri, Sep 12, 2008 at 1:06 PM, Preston Price <pr...@strands.com> wrote:
>>
>>  I see this error every once in a while in our client logs:
>>>
>>> java.io.IOException: HRegionInfo was null or empty in .META.
>>>      at
>>>
>>> org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:429)
>>>      at
>>>
>>> org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:350)
>>>      at
>>>
>>> org.apache.hadoop.hbase.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:318)
>>>      at org.apache.hadoop.hbase.HTable.getRegionLocation(HTable.java:114)
>>>      at
>>>
>>> org.apache.hadoop.hbase.HTable$ServerCallable.instantiateServer(HTable.java:1009)
>>>      at
>>>
>>> org.apache.hadoop.hbase.HTable.getRegionServerWithRetries(HTable.java:1024)
>>>      at org.apache.hadoop.hbase.HTable.commit(HTable.java:763)
>>>      at org.apache.hadoop.hbase.HTable.commit(HTable.java:744)
>>>
>>> I usually only see it after truncating our table like this:
>>> disable tableName;
>>> truncate table tableName;
>>> enable tableName;
>>>
>>> In our process that does the inserts we see it hang for a while on the
>>> first insert until it gets this error, and then starts inserting records
>>> with no problem.
>>>
>>> Is this something I should be concerned with?
>>> I am not familiar enough with what goes on 'under the hood' to know what
>>> this error is trying to tell me.
>>>
>>> Hadoop version: 0.16.2
>>> HBase version: 0.1.3
>>>
>>>
>>> Thanks
>>>
>>> -Preston
>>>
>>>
>

Re: .META error when I try to insert after truncating table

Posted by Preston Price <pr...@strands.com>.
I took Jean-Daniel Cryans' advice and am now trying to get Hadoop  
0.18.0 and HBase 0.2.0 up and running.
I copied the configuration from the previous versions of HBase and  
Hadoop we had running, and with a slight modification I got hadoop  
going.
I still can't get HBase 0.2.0 going.
Here is the output from the master log:

Fri Sep 12 12:01:23 MDT 2008 Starting master on atlas
java version "1.5.0_15"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_15-b04)
Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_15-b04, mixed mode)
ulimit -n 1024
2008-09-12 12:02:24,157 ERROR org.apache.hadoop.hbase.master.HMaster:  
Can not start master
java.lang.reflect.InvocationTargetException
         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
         at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
         at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:798)
         at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:832)
Caused by: java.net.SocketTimeoutException: timed out waiting for rpc response
         at org.apache.hadoop.ipc.Client.call(Client.java:559)
         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
         at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:313)
         at org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:102)
         at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:178)
         at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:68)
         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1280)
         at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1291)
         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:203)
         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:108)
         at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:178)
         at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:148)
         ... 6 more

It looks like it can't connect to the Hadoop DFS I have running, but  
I've confirmed that Hadoop is running by manipulating files on the DFS.
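
One quick cross-check is to hit the same namenode URI that hbase.rootdir
points at, from the machine running the HBase master, using the stock
Hadoop shell:

    hadoop dfs -ls hdfs://atlas:54310/
    # if this hangs or times out, the hbase master will hit the same rpc timeout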

Here is the hbase-site.xml I am using:
<configuration>

   <property>
     <name>hbase.master</name>
     <value>atlas:60000</value>
     <description>The host and port that the HBase master runs at.
     </description>
   </property>

   <property>
     <name>hbase.rootdir</name>
     <value>hdfs://atlas:54310/hbase</value>
     <description>The directory shared by region servers.
     </description>
   </property>

</configuration>

Any ideas?

Thanks!

-Preston

On Sep 12, 2008, at 11:10 AM, Jean-Daniel Cryans wrote:

> Preston,
>
> You should definitely upgrade to HBase 0.2.
>
> J-D
>
> On Fri, Sep 12, 2008 at 1:06 PM, Preston Price <pr...@strands.com>  
> wrote:
>
>> I see this error every once in a while in our client logs:
>>
>> java.io.IOException: HRegionInfo was null or empty in .META.
>>       at org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:429)
>>       at org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:350)
>>       at org.apache.hadoop.hbase.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:318)
>>       at org.apache.hadoop.hbase.HTable.getRegionLocation(HTable.java:114)
>>       at org.apache.hadoop.hbase.HTable$ServerCallable.instantiateServer(HTable.java:1009)
>>       at org.apache.hadoop.hbase.HTable.getRegionServerWithRetries(HTable.java:1024)
>>       at org.apache.hadoop.hbase.HTable.commit(HTable.java:763)
>>       at org.apache.hadoop.hbase.HTable.commit(HTable.java:744)
>>
>> I usually only see it after truncating our table like this:
>> disable tableName;
>> truncate table tableName;
>> enable tableName;
>>
>> In our process that does the inserts we see it hang for a while on  
>> the
>> first insert until it gets this error, and then starts inserting  
>> records
>> with no problem.
>>
>> Is this something I should be concerned with?
>> I am not familiar enough with what goes on 'under the hood' to know  
>> what
>> this error is trying to tell me.
>>
>> Hadoop version: 0.16.2
>> HBase version: 0.1.3
>>
>>
>> Thanks
>>
>> -Preston
>>


Re: .META error when I try to insert after truncating table

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Preston,

You should definitely upgrade to HBase 0.2.

J-D

On Fri, Sep 12, 2008 at 1:06 PM, Preston Price <pr...@strands.com> wrote:

> I see this error every once in a while in our client logs:
>
> java.io.IOException: HRegionInfo was null or empty in .META.
>        at
> org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:429)
>        at
> org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:350)
>        at
> org.apache.hadoop.hbase.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:318)
>        at org.apache.hadoop.hbase.HTable.getRegionLocation(HTable.java:114)
>        at
> org.apache.hadoop.hbase.HTable$ServerCallable.instantiateServer(HTable.java:1009)
>        at
> org.apache.hadoop.hbase.HTable.getRegionServerWithRetries(HTable.java:1024)
>        at org.apache.hadoop.hbase.HTable.commit(HTable.java:763)
>        at org.apache.hadoop.hbase.HTable.commit(HTable.java:744)
>
> I usually only see it after truncating our table like this:
> disable tableName;
> truncate table tableName;
> enable tableName;
>
> In our process that does the inserts we see it hang for a while on the
> first insert until it gets this error, and then starts inserting records
> with no problem.
>
> Is this something I should be concerned with?
> I am not familiar enough with what goes on 'under the hood' to know what
> this error is trying to tell me.
>
> Hadoop version: 0.16.2
> HBase version: 0.1.3
>
>
> Thanks
>
> -Preston
>

.META error when I try to insert after truncating table

Posted by Preston Price <pr...@strands.com>.
I see this error every once in a while in our client logs:

java.io.IOException: HRegionInfo was null or empty in .META.
         at org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegionInMeta(HConnectionManager.java:429)
         at org.apache.hadoop.hbase.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:350)
         at org.apache.hadoop.hbase.HConnectionManager$TableServers.relocateRegion(HConnectionManager.java:318)
         at org.apache.hadoop.hbase.HTable.getRegionLocation(HTable.java:114)
         at org.apache.hadoop.hbase.HTable$ServerCallable.instantiateServer(HTable.java:1009)
         at org.apache.hadoop.hbase.HTable.getRegionServerWithRetries(HTable.java:1024)
         at org.apache.hadoop.hbase.HTable.commit(HTable.java:763)
         at org.apache.hadoop.hbase.HTable.commit(HTable.java:744)

I usually only see it after truncating our table like this:
disable tableName;
truncate table tableName;
enable tableName;

In our process that does the inserts we see it hang for a while on the  
first insert until it gets this error, and then starts inserting  
records with no problem.

Is this something I should be concerned with?
I am not familiar enough with what goes on 'under the hood' to know  
what this error is trying to tell me.

Hadoop version: 0.16.2
HBase version: 0.1.3


Thanks

-Preston

RE: setting up hbase develop environment

Posted by Jim Kellerman <ji...@powerset.com>.
If you are planning to make changes, then you should check out the source trees for hbase and hadoop:

http://svn.apache.org/repos/asf/hadoop/hbase/trunk (for hbase)
http://svn.apache.org/repos/asf/hadoop/core/trunk (for hadoop)
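
For example, a minimal checkout (the target directory names here are arbitrary):

    svn checkout http://svn.apache.org/repos/asf/hadoop/hbase/trunk hbase-trunk
    svn checkout http://svn.apache.org/repos/asf/hadoop/core/trunk hadoop-trunk
    # each tree then builds with ant from its top-level build.xml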

---
Jim Kellerman, Senior Software Development Engineer
Powerset (Live Search, Microsoft Corporation)


> -----Original Message-----
> From: Ski Gh3 [mailto:skigh3@gmail.com]
> Sent: Thursday, September 11, 2008 5:10 PM
> To: hbase-user@hadoop.apache.org
> Subject: setting up hbase develop environment
>
> Hi all,
>
> I am a newcomer to HBase/Hadoop and I'm a little confused about setting up
> the development environment.
>
> I thought Hadoop came with HBase (in a contrib folder or so), but it's not in
> the Hadoop version I downloaded (0.18.0). So should I download HBase
> separately?
> Then how can I put them into a single project and build them together? (Since I
> am eventually interested in HBase, but may also want to make changes to
> Hadoop if needed.)
>
>
>
> Thanks!