Posted to user@hbase.apache.org by Yair Even-Zohar <ya...@revenuescience.com> on 2009/02/06 14:42:36 UTC

backup tables using ImportMR / ExportMR ( HBASE-974 )

I'm trying to back up tables using this code. Can you please explain what
HBaseRef.class is?

I found no reference to it in the API or the source code, and could find
little about it online.

Thanks
-Yair

Re: backup tables using ImportMR / ExportMR ( HBASE-974 )

Posted by Erik Holstad <er...@gmail.com>.
Hey Yair!
Ran a test Export and a test Import this morning, and apart from the fact
that they were not the fastest on the planet :) they worked just fine. The
only thing I needed to change was to remove the dependency on HBaseRef
from makeImportJar.sh.

Not really sure what you mean by the reducer class not being set in the job
conf? Calling the method
TableMapReduceUtil.initTableReduceJob(outputTable, MyReducer.class, c) does
just that, no?
Or do you not want to use that method and would rather set it yourself?
Or are you talking about the difference, in the Importer now, between
setting up the input and the output?

Regards Erik

RE: backup tables using ImportMR / ExportMR ( HBASE-974 )

Posted by Yair Even-Zohar <ya...@revenuescience.com>.
Alas, I'm afraid the problem is in the ImportMR.

My cluster is loading data into HBase from a different set of files using
the TableReduce, so I know it is not the cluster setup.
I also output the HBase data to a text file (so I am guaranteed that the
data is inserted into the cluster).

There could be some miscommunication between my cluster setup and the
Importer, but I don't know what that miscommunication is.

Another small detail: the importer does not set the reducer class in the
JobConf... I tried doing that too, didn't help :-(

I think this is not a big deal, because the reducer class is defined in
initTableReduceJob, but that's one more thing to keep in mind.

Thanks
-Yair

-----Original Message-----
From: Erik Holstad [mailto:erikholstad@gmail.com] 
Sent: Tuesday, February 10, 2009 11:04 PM
To: hbase-user@hadoop.apache.org
Subject: Re: backup tables using ImportMR / ExportMR ( HBASE-974 )

Hey Yair!

So you are saying that you don't think that the problem is in the
importer but in your cluster setup?

Thanks for finding all these small things, like with the @Override for
example.
I haven't used the code in a little while, but will get my hands dirty
tomorrow morning, so we can figure this out and get it working for you.
My test cluster is down at the moment but will hopefully be up tomorrow :)

So, maybe you can hang out on IRC and we will try to get this going
tomorrow?

Regards Erik


RE: backup tables using ImportMR / ExportMR ( HBASE-974 )

Posted by Yair Even-Zohar <ya...@revenuescience.com>.
I was afraid that was the problem with the importer, but I verified that
the JobConf gets the correct address for the master (running on EC2).
It could be that the slaves don't get connected to the master correctly,
but I don't see how that could happen.

One more thing: looking at the ImportMR code in Eclipse, it seems that I
have to remove the "@Override" from the reducer, because TableReduce is
abstract in 0.19.

Thanks
-Yair

-----Original Message-----
From: Erik Holstad [mailto:erikholstad@gmail.com] 
Sent: Saturday, February 07, 2009 12:31 AM
To: hbase-user@hadoop.apache.org
Subject: Re: backup tables using ImportMR / ExportMR ( HBASE-974 )

Hey Yair!
Answers in line.


> 1) I had to replace "new Configuration()" with "new
> HBaseConfiguration()" in the java source, or the Export didn't work
> properly.

This is probably because the API changed after I last used it.


>
>
> 2) I had to add the hadoop jar and the hbase jar to the classpath in
> make.....jar.sh, or they wouldn't compile.

We have these set globally, so I didn't think about it.

If you can post your updates to these files that would be great, or if
you send them to me, I will put them up.
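(For reference, the classpath addition in point 2 amounts to something like
the following sketch; the jar locations are placeholders for wherever your
hadoop and hbase jars actually live, and the version numbers are assumed.)

```shell
# Sketch of the classpath setup a make-jar script needs before compiling.
# Hypothetical paths -- substitute the real locations of your jars.
HADOOP_JAR=/opt/hadoop/hadoop-0.19.0-core.jar
HBASE_JAR=/opt/hbase/hbase-0.19.0.jar
CLASSPATH="$HADOOP_JAR:$HBASE_JAR:."
echo "$CLASSPATH"
# javac -classpath "$CLASSPATH" ImportMR.java   # then jar up the classes
```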


>
>
> 3) When running the ImportMR.sh, I always get the following error after
> 100% map and 40% or 66% reduce. Please let me know if you are familiar
> with the problem
> Thanks
> -Yair
>
> 09/02/06 15:57:52 INFO mapred.JobClient:  map 100% reduce 66%
> 09/02/06 16:00:47 INFO mapred.JobClient:  map 100% reduce 53%
> 09/02/06 16:00:47 INFO mapred.JobClient: Task Id : attempt_200902061529_0007_r_000000_0, Status : FAILED
> org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:236)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:422)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:74)
>        at ImportMR$MyReducer.reduce(ImportMR.java:138)
>        at ImportMR$MyReducer.reduce(ImportMR.java:128)
>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>        at org.apache.hadoop.mapred.Child.main(Child.java:155)
>
> attempt_200902061529_0007_r_000000_0: Exception in thread "Timer thread for monitoring mapred" java.lang.NullPointerException
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.xdr_string(GangliaContext.java:195)
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitMetric(GangliaContext.java:138)
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitRecord(GangliaContext.java:123)
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.emitRecords(AbstractMetricsContext.java:304)
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.timerEvent(AbstractMetricsContext.java:290)
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.access$000(AbstractMetricsContext.java:50)
> attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext$1.run(AbstractMetricsContext.java:249)
> attempt_200902061529_0007_r_000000_0:   at java.util.TimerThread.mainLoop(Timer.java:512)
> attempt_200902061529_0007_r_000000_0:   at java.util.TimerThread.run(Timer.java:462)
> 09/02/06 16:00:48 INFO mapred.JobClient:  map 100% reduce 13%
> 09/02/06 16:00:48 INFO mapred.JobClient: Task Id : attempt_200902061529_0007_r_000002_0, Status : FAILED
> org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:236)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:422)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:74)
>        at ImportMR$MyReducer.reduce(ImportMR.java:138)
>        at ImportMR$MyReducer.reduce(ImportMR.java:128)
>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>        at org.apache.hadoop.mapred.Child.main(Child.java:155)
>
> attempt_200902061529_0007_r_000002_0: Exception in thread "Timer thread for monitoring mapred" java.lang.NullPointerException
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.xdr_string(GangliaContext.java:195)
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitMetric(GangliaContext.java:138)
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitRecord(GangliaContext.java:123)
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.emitRecords(AbstractMetricsContext.java:304)
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.timerEvent(AbstractMetricsContext.java:290)
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.access$000(AbstractMetricsContext.java:50)
> attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext$1.run(AbstractMetricsContext.java:249)
> attempt_200902061529_0007_r_000002_0:   at java.util.TimerThread.mainLoop(Timer.java:512)
> attempt_200902061529_0007_r_000002_0:   at java.util.TimerThread.run(Timer.java:462)
> 09/02/06 16:00:48 INFO mapred.JobClient: Task Id : attempt_200902061529_0007_r_000001_0, Status : FAILED
> org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:236)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:422)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:74)
>        at ImportMR$MyReducer.reduce(ImportMR.java:138)
>        at ImportMR$MyReducer.reduce(ImportMR.java:128)
>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>        at org.apache.hadoop.mapred.Child.main(Child.java:155)
>
> attempt_200902061529_0007_r_000001_0: Exception in thread "Timer thread for monitoring mapred" java.lang.NullPointerException
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.xdr_string(GangliaContext.java:195)
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitMetric(GangliaContext.java:138)
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitRecord(GangliaContext.java:123)
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.emitRecords(AbstractMetricsContext.java:304)
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.timerEvent(AbstractMetricsContext.java:290)
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.access$000(AbstractMetricsContext.java:50)
> attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext$1.run(AbstractMetricsContext.java:249)
> attempt_200902061529_0007_r_000001_0:   at java.util.TimerThread.mainLoop(Timer.java:512)
> attempt_200902061529_0007_r_000001_0:   at java.util.TimerThread.run(Timer.java:462)
> 09/02/06 16:00:48 INFO mapred.JobClient: Task Id : attempt_200902061529_0007_r_000003_0, Status : FAILED
> org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:236)
>        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:422)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
>        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:74)
>        at ImportMR$MyReducer.reduce(ImportMR.java:138)
>        at ImportMR$MyReducer.reduce(ImportMR.java:128)
>        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
>        at org.apache.hadoop.mapred.Child.main(Child.java:155)

Looks like the importer can't access HBase. Do you have a copy of
hbase-site.xml in the import library, or some other way for it to find
the master?
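(For reference: the localhost:60000 in the trace is consistent with the
reduce tasks falling back to a default master address. A minimal sketch of
the kind of hbase-site.xml entry involved, assuming the 0.19-era
hbase.master property; the hostname is a placeholder for the real master.)

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Point clients at the real master instead of the localhost default.
       "master.example.com" is a placeholder for your EC2 master host. -->
  <property>
    <name>hbase.master</name>
    <value>master.example.com:60000</value>
  </property>
</configuration>
```

This file has to be on the classpath of the map/reduce tasks themselves,
not just the submitting client, or each task will resolve the default.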

Regards Erik


RE: backup tables using ImportMR / ExportMR ( HBASE-974 )

Posted by Yair Even-Zohar <ya...@revenuescience.com>.
Thanks for the quick reply. Several comments:
1) I had to replace "new Configuration()" with "new
HBaseConfiguration()" in the java source, or the Export didn't work
properly.

2) I had to add the hadoop jar and the hbase jar to the classpath in
make.....jar.sh, or they wouldn't compile.

3) When running the ImportMR.sh, I always get the following error after
100% map and 40% or 66% reduce. Please let me know if you are familiar
with the problem.
Thanks
-Yair

09/02/06 15:57:52 INFO mapred.JobClient:  map 100% reduce 66%
09/02/06 16:00:47 INFO mapred.JobClient:  map 100% reduce 53%
09/02/06 16:00:47 INFO mapred.JobClient: Task Id : attempt_200902061529_0007_r_000000_0, Status : FAILED
org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:236)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:422)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:74)
        at ImportMR$MyReducer.reduce(ImportMR.java:138)
        at ImportMR$MyReducer.reduce(ImportMR.java:128)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
        at org.apache.hadoop.mapred.Child.main(Child.java:155)

attempt_200902061529_0007_r_000000_0: Exception in thread "Timer thread for monitoring mapred" java.lang.NullPointerException
attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.xdr_string(GangliaContext.java:195)
attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitMetric(GangliaContext.java:138)
attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitRecord(GangliaContext.java:123)
attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.emitRecords(AbstractMetricsContext.java:304)
attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.timerEvent(AbstractMetricsContext.java:290)
attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.access$000(AbstractMetricsContext.java:50)
attempt_200902061529_0007_r_000000_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext$1.run(AbstractMetricsContext.java:249)
attempt_200902061529_0007_r_000000_0:   at java.util.TimerThread.mainLoop(Timer.java:512)
attempt_200902061529_0007_r_000000_0:   at java.util.TimerThread.run(Timer.java:462)
09/02/06 16:00:48 INFO mapred.JobClient:  map 100% reduce 13%
09/02/06 16:00:48 INFO mapred.JobClient: Task Id : attempt_200902061529_0007_r_000002_0, Status : FAILED
org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:236)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:422)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:74)
        at ImportMR$MyReducer.reduce(ImportMR.java:138)
        at ImportMR$MyReducer.reduce(ImportMR.java:128)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
        at org.apache.hadoop.mapred.Child.main(Child.java:155)

attempt_200902061529_0007_r_000002_0: Exception in thread "Timer thread for monitoring mapred" java.lang.NullPointerException
attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.xdr_string(GangliaContext.java:195)
attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitMetric(GangliaContext.java:138)
attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitRecord(GangliaContext.java:123)
attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.emitRecords(AbstractMetricsContext.java:304)
attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.timerEvent(AbstractMetricsContext.java:290)
attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.access$000(AbstractMetricsContext.java:50)
attempt_200902061529_0007_r_000002_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext$1.run(AbstractMetricsContext.java:249)
attempt_200902061529_0007_r_000002_0:   at java.util.TimerThread.mainLoop(Timer.java:512)
attempt_200902061529_0007_r_000002_0:   at java.util.TimerThread.run(Timer.java:462)
09/02/06 16:00:48 INFO mapred.JobClient: Task Id : attempt_200902061529_0007_r_000001_0, Status : FAILED
org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:236)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:422)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:74)
        at ImportMR$MyReducer.reduce(ImportMR.java:138)
        at ImportMR$MyReducer.reduce(ImportMR.java:128)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
        at org.apache.hadoop.mapred.Child.main(Child.java:155)

attempt_200902061529_0007_r_000001_0: Exception in thread "Timer thread for monitoring mapred" java.lang.NullPointerException
attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.xdr_string(GangliaContext.java:195)
attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitMetric(GangliaContext.java:138)
attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.ganglia.GangliaContext.emitRecord(GangliaContext.java:123)
attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.emitRecords(AbstractMetricsContext.java:304)
attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.timerEvent(AbstractMetricsContext.java:290)
attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext.access$000(AbstractMetricsContext.java:50)
attempt_200902061529_0007_r_000001_0:   at org.apache.hadoop.metrics.spi.AbstractMetricsContext$1.run(AbstractMetricsContext.java:249)
attempt_200902061529_0007_r_000001_0:   at java.util.TimerThread.mainLoop(Timer.java:512)
attempt_200902061529_0007_r_000001_0:   at java.util.TimerThread.run(Timer.java:462)
09/02/06 16:00:48 INFO mapred.JobClient: Task Id : attempt_200902061529_0007_r_000003_0, Status : FAILED
org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getMaster(HConnectionManager.java:236)
        at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.locateRegion(HConnectionManager.java:422)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:114)
        at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:74)
        at ImportMR$MyReducer.reduce(ImportMR.java:138)
        at ImportMR$MyReducer.reduce(ImportMR.java:128)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:430)
        at org.apache.hadoop.mapred.Child.main(Child.java:155)

-----Original Message-----
From: Erik Holstad [mailto:erikholstad@gmail.com] 
Sent: Friday, February 06, 2009 7:51 PM
To: hbase-user@hadoop.apache.org
Subject: Re: backup tables using ImportMR / ExportMR ( HBASE-974 )

Hey Yair!
Sorry about that; HBaseRef is not needed for the import. I deleted the
makeJar file, removed the code, and uploaded a new version, so you can
just remove it in your code or download the new version.

If you have any more questions, please let me know.

Regards Erik
