Posted to user@hbase.apache.org by W <wi...@gmail.com> on 2009/02/02 10:15:53 UTC

hbase upgrade

Dear All,

I did a fresh install of Hadoop & HBase 0.19. When I start HBase I get
this error:

java.io.IOException: File system version file hbase.version does not
exist. No upgrade possible. See
http://wiki.apache.org/hadoop/Hbase/HowToMigrate for more information.
	at org.apache.hadoop.hbase.util.Migrate.run(Migrate.java:175)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)

I couldn't find any info on the net.

Please help me solve this problem.

Regards,
Wildan

-- 
---
tobeThink!
www.tobethink.com

Aligning IT and Education

>> 021-99325243
Y! : hawking_123
LinkedIn : http://www.linkedin.com/in/wildanmaulana

Re: hbase upgrade

Posted by W <wi...@gmail.com>.
On Wed, Feb 4, 2009 at 10:38 AM, Jean-Daniel Cryans <jd...@apache.org> wrote:
> Wildan,
>
>> here is the stack trace; this happens when we create the directory
>> manually using hadoop dfs -mkdir :
>
> As I said in my first answer, the Getting Started page states that you
> should not create the directory manually; please delete it and try again.
>

Yes, I already did that and it works now. Thanks!


Re: hbase upgrade

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Wildan,

> here is the stack trace; this happens when we create the directory
> manually using hadoop dfs -mkdir :

As I said in my first answer, the Getting Started page states that you
should not create the directory manually; please delete it and try again.
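
Concretely, that delete-and-retry step might look like the sketch below. The URL is the one from the hbase-site.xml posted in this thread; `hdfs_path` is an illustrative helper, not part of the Hadoop CLI, and the whole thing is guarded so it is a no-op where hadoop is not installed.

```shell
# Sketch: remove the hand-created HBase root directory in HDFS and let
# HBase create it on the next start.
ROOTDIR_URL="hdfs://tobethink.pappiptek.lipi.go.id:54310/user/hadoop/hbase"

# Strip scheme and authority, keeping only the DFS path component.
hdfs_path() { echo "/${1#hdfs://*/}"; }

HBASE_PATH=$(hdfs_path "$ROOTDIR_URL")   # /user/hadoop/hbase

# dfs -rmr is the 0.19-era recursive delete; guarded so the sketch is
# harmless on machines without hadoop on the PATH.
if command -v hadoop >/dev/null 2>&1; then
  hadoop dfs -rmr "$HBASE_PATH"
fi
# ...then restart with bin/start-hbase.sh and let HBase create the directory.
```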


J-D

On Tue, Feb 3, 2009 at 10:34 PM, W <wi...@gmail.com> wrote:

> On Tue, Feb 3, 2009 at 8:52 PM, Jean-Daniel Cryans <jd...@apache.org>
> wrote:
> > I'm having a hard time following you here... So you solved your Upgrade
> > problem? It would seem so.
>
> Sorry.
>
> Yes, I have solved it. I don't know why, but after the second HBase
> start everything is running smoothly.
>
> > You say here that your NN is on port 54310 but it is commented in your
> > hbase-site.xml, why?
>
> It's the old configuration. I first used 38440 as the NameNode port
> because I matched the PID from the jps output against the port numbers
> in the netstat output, but then I saw in the NameNode log that the NN
> port was 54310, so I removed 38440 and used 54310 as the NN port
> instead.
>
>
> > You also talk about a null pointer, can you post the stack trace?
> >
>
> here is the stack trace; this happens when we create the directory
> manually using hadoop dfs -mkdir :
>
> [stack trace snipped; it appears in full in the original message below]
>
> I hope that's clear enough. Thanks!
>
> Regards,
> Wildan
>
>

Re: hbase upgrade

Posted by W <wi...@gmail.com>.
On Tue, Feb 3, 2009 at 8:52 PM, Jean-Daniel Cryans <jd...@apache.org> wrote:
> I'm having a hard time following you here... So you solved your Upgrade
> problem? It would seem so.

Sorry.

Yes, I have solved it. I don't know why, but after the second HBase
start everything is running smoothly.

> You say here that your NN is on port 54310 but it is commented in your
> hbase-site.xml, why?

It's the old configuration. I first used 38440 as the NameNode port
because I matched the PID from the jps output against the port numbers
in the netstat output, but then I saw in the NameNode log that the NN
port was 54310, so I removed 38440 and used 54310 as the NN port
instead.


> You also talk about a null pointer, can you post the stack trace?
>

here is the stack trace; this happens when we create the directory
manually using hadoop dfs -mkdir :

---cut-----------
2009-01-29 16:58:59,301 INFO org.apache.hadoop.hbase.master.HMaster:
vmName=Java HotSpot(TM) Server VM, vmVendor=Sun Microsystems Inc.,
vmVersion=11.0-b15
2009-01-29 16:58:59,302 INFO org.apache.hadoop.hbase.master.HMaster:
vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError,
-Dhbase.log.dir=/opt/hbase/bin/../logs,
-Dhbase.log.file=hbase-hadoop-master-tobeThink.log,
-Dhbase.home.dir=/opt/hbase/bin/.., -Dhbase.id.str=hadoop,
-Dhbase.root.logger=INFO,DRFA,
-Djava.library.path=/opt/hbase/bin/../lib/native/Linux-i386-32]
2009-01-29 16:58:59,744 ERROR org.apache.hadoop.hbase.master.HMaster:
Can not start master
java.io.IOException: Call to
tobethink.pappiptek.lipi.go.id/192.168.107.119:54310 failed on local
exception: null
at org.apache.hadoop.ipc.Client.call(Client.java:699)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
at $Proxy0.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:319)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:104)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:177)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:74)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1367)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:56)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1379)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:215)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:120)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:186)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:156)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:96)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:978)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1022)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:493)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:438)
------cut--------------

I hope that's clear enough. Thanks!

Regards,
Wildan


Re: hbase upgrade

Posted by Jean-Daniel Cryans <jd...@apache.org>.
I'm having a hard time following you here... So you solved your Upgrade
problem? It would seem so.

You say here that your NN is on port 54310 but it is commented in your
hbase-site.xml, why?

You also talk about a null pointer, can you post the stack trace?

Thx,

J-D

On Mon, Feb 2, 2009 at 11:04 PM, W <wi...@gmail.com> wrote:

> Btw,
>
> It's already running now; I'm using NameNode port 54310.
>
> Thanks!
>
>
> On Tue, Feb 3, 2009 at 10:52 AM, W <wi...@gmail.com> wrote:
> > Hello Daniel,
> >
> > I already read the Getting Started page. I get a null exception when
> > starting HBase. Here is my hbase-site.xml:
>
>

Re: hbase upgrade

Posted by W <wi...@gmail.com>.
Btw,

It's already running now; I'm using NameNode port 54310.

Thanks!


On Tue, Feb 3, 2009 at 10:52 AM, W <wi...@gmail.com> wrote:
> Hello Daniel,
>
> I already read the Getting Started page. I get a null exception when
> starting HBase. Here is my hbase-site.xml:


Re: hbase upgrade

Posted by W <wi...@gmail.com>.
Hello Daniel,

I already read the Getting Started page. I get a null exception when
starting HBase. Here is my hbase-site.xml:

--cut------------
<configuration>
  <property>
    <name>hbase.rootdir</name>
   <!--<value>hdfs://tobethink.pappiptek.lipi.go.id:54310/user/hadoop/hbase</value>-->
    <value>hdfs://tobethink.pappiptek.lipi.go.id:38400/user/hadoop/hbase</value>
    <description>The directory shared by region servers.
    Should be fully-qualified to include the filesystem to use.
    E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
    </description>
  </property>

<!--
<property>
    <name>hbase.master</name>
    <value>http://tobethink.pappiptek.lipi.go.id::60000</value>
    <description>The host and port that the HBase master runs at.
    A value of 'local' runs the master and a regionserver in
    a single process.
    </description>
  </property>
-->


</configuration>
------cut-----------------
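
One thing worth checking against this config (a guess, though the rest of the thread bears it out): the port in hbase.rootdir must match the NameNode's actual RPC port. A small sketch; `hdfs_port` is a hypothetical helper for illustration, not Hadoop tooling.

```shell
# Pull the port out of an hbase.rootdir-style URL so it can be compared
# with the NameNode's real RPC port.
hdfs_port() {
  rest="${1#hdfs://}"    # drop the scheme
  rest="${rest%%/*}"     # keep host:port
  echo "${rest##*:}"     # keep the port
}

hdfs_port "hdfs://tobethink.pappiptek.lipi.go.id:38400/user/hadoop/hbase"   # prints 38400
# Compare the result with fs.default.name in Hadoop's site config; if the
# two ports disagree, the master dies with the "failed on local
# exception: null" / EOFException seen in this thread.
```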

I'm trying pseudo-distributed mode first.


Btw, I found the NameNode port from the jps and netstat output.
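
That jps/netstat lookup might go like the sketch below: jps lists JVM pids by class name and netstat -tlnp maps listening ports to pids. The awk/grep patterns are assumptions about typical Linux output, `listen_port` is an illustrative helper, and the lookup is guarded so the sketch is harmless without a running cluster.

```shell
# Find the NameNode JVM's pid, then the TCP ports it listens on.
if command -v jps >/dev/null 2>&1; then
  NN_PID=$(jps | awk '$2 == "NameNode" {print $1}')
  # Matching lines look like: tcp 0 0 0.0.0.0:54310 0.0.0.0:* LISTEN 1234/java
  [ -n "$NN_PID" ] && netstat -tlnp 2>/dev/null | grep "$NN_PID/java"
fi

# The port is whatever follows the last ':' in the Local Address column.
listen_port() { echo "${1##*:}"; }
listen_port "0.0.0.0:54310"   # prints 54310
```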

Thanks for the help.

Regards,
Wildan

On Mon, Feb 2, 2009 at 8:58 PM, Jean-Daniel Cryans <jd...@apache.org> wrote:
> Wildan,
>
> Make sure you followed this from the Getting Started page :
>
> "Note: Let hbase create the directory. If you don't, you'll get warning
> saying hbase needs a migration run because the directory is missing files
> expected by hbase (it'll create them if you let it)."
>
>
> J-D

Re: hbase upgrade

Posted by Jean-Daniel Cryans <jd...@apache.org>.
Wildan,

Make sure you followed this from the Getting Started page :

"Note: Let hbase create the directory. If you don't, you'll get warning
saying hbase needs a migration run because the directory is missing files
expected by hbase (it'll create them if you let it)."


J-D

On Mon, Feb 2, 2009 at 4:15 AM, W <wi...@gmail.com> wrote:

> Dear All,
>
> I did a fresh install of Hadoop & HBase 0.19. When I start HBase I get
> this error:
>
> java.io.IOException: File system version file hbase.version does not
> exist. No upgrade possible. See
> http://wiki.apache.org/hadoop/Hbase/HowToMigrate for more information.
>        at org.apache.hadoop.hbase.util.Migrate.run(Migrate.java:175)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>
> I couldn't find any info on the net.
>
> Please help me solve this problem.
>
> Regards,
> Wildan
>
>