Posted to user@hbase.apache.org by Manjeet Singh <ma...@gmail.com> on 2017/10/23 09:35:24 UTC

hbase data migration from one cluster to another cluster on different versions

Hi All,

I have a query regarding HBase data migration from one cluster to another
cluster on the same network (N/W), but with different HBase versions: the
source cluster runs HBase 0.94.27 and the destination cluster runs HBase
1.2.1.

I used the below command to take a backup of an HBase table on the source
cluster:
 ./hbase org.apache.hadoop.hbase.mapreduce.Export SPDBRebuild
/data/backupData/
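
As a side note, Export also accepts optional positional arguments for the
number of cell versions and a start/end time range; by default only the
latest version of each cell is exported. A rough sketch with placeholder
values (5 versions, a millisecond time window):

# keep up to 5 versions per cell, limited to the given time window
./hbase org.apache.hadoop.hbase.mapreduce.Export SPDBRebuild /data/backupData/ 5 1451606400000 1481846400000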

The below files were generated by the Export command:


drwxr-xr-x 3 root root        4096 Dec  9  2016 _logs
-rw-r--r-- 1 root root   788227695 Dec 16  2016 part-m-00000
-rw-r--r-- 1 root root  1098757026 Dec 16  2016 part-m-00001
-rw-r--r-- 1 root root   906973626 Dec 16  2016 part-m-00002
-rw-r--r-- 1 root root  1981769314 Dec 16  2016 part-m-00003
-rw-r--r-- 1 root root  2099785782 Dec 16  2016 part-m-00004
-rw-r--r-- 1 root root  4118835540 Dec 16  2016 part-m-00005
-rw-r--r-- 1 root root 14217981341 Dec 16  2016 part-m-00006
-rw-r--r-- 1 root root           0 Dec 16  2016 _SUCCESS


In order to restore these files, I am assuming I have to move them to the
destination cluster and run the below command:

hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> /data/backupData/

Please suggest whether I am on the correct track, and second, whether anyone
has another option.
I have tried this with test data, but the above command took a very long time
and failed at the end:

17/10/23 11:54:21 INFO mapred.JobClient:  map 0% reduce 0%
17/10/23 12:04:24 INFO mapred.JobClient: Task Id :
attempt_201710131340_0355_m_000002_0, Status : FAILED
Task attempt_201710131340_0355_m_000002_0 failed to report status for 600
seconds. Killing!
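
The "failed to report status for 600 seconds" above is the MapReduce task
timeout. If the import is merely slow rather than hung, the timeout can be
raised; a rough sketch, assuming the job picks up generic -D options (the
property is mapred.task.timeout on Hadoop 1.x, mapreduce.task.timeout on
Hadoop 2.x):

# allow 30 minutes (1800000 ms) between task progress reports
hbase org.apache.hadoop.hbase.mapreduce.Import -Dmapred.task.timeout=1800000 <tablename> /data/backupData/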


Thanks
Manjeet Singh

Re: hbase data migration from one cluster to another cluster on different versions

Posted by Manjeet Singh <ma...@gmail.com>.
Dear Yung, Anil and Enrico,

I am now able to execute the import command.

Below are the steps:

As the Hadoop version as well as the HBase version differ between the source
and the destination, I first used the export command on the source cluster,
which by default writes the data to the specified HDFS location.
I then copied that data to my local filesystem (because the Hadoop versions
differ, distcp did not work).
I used the scp command to put the data onto the destination cluster's Linux
environment, and from there I put it into HDFS.
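
For reference, a rough sketch of that manual transfer (hostnames and paths
below are placeholders):

# on the source (Hadoop 1.x) cluster: copy the export out of HDFS
hadoop fs -get /data/backupData /tmp/ExportedFiles
# ship it to an edge node of the destination cluster
scp -r /tmp/ExportedFiles user@dest-node:/tmp/ExportedFiles
# on the destination (Hadoop 2.x) cluster: put it back into HDFS
hdfs dfs -put /tmp/ExportedFiles /data/ExportedFiles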

Then I used the below command, as suggested by Yung:

sudo -u hdfs hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import test_table /data/ExportedFiles

It worked and the data migrated.
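
As a rough sanity check after the import, row counts on both clusters can be
compared, e.g. with the bundled RowCounter job (run it on each cluster and
compare the ROWS counter in the job output):

hbase org.apache.hadoop.hbase.mapreduce.RowCounter test_table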

Thanks all

Manjeet Singh


-- 
luv all

Re: hbase data migration from one cluster to another cluster on different versions

Posted by anil gupta <an...@gmail.com>.
This blogpost from Pinterest can provide you more pointers:
https://medium.com/@Pinterest_Engineering/upgrading-pinterest-to-hbase-1-2-from-0-94-e6e34c157783

-- 
Thanks & Regards,
Anil Gupta

Re: hbase data migration from one cluster to another cluster on different versions

Posted by Enrico Olivelli <eo...@gmail.com>.
I had the same problem.
I wrote a procedure to move data from 0.94 to 1.x.
On GitHub I have a prototype, which is not exactly the procedure used in my
company, but it can be useful as an example:

https://github.com/eolivelli/multi-hbase

Hope that helps
Enrico

-- 


-- Enrico Olivelli

Re: hbase data migration from one cluster to another cluster on different versions

Posted by Yung-An He <ma...@gmail.com>.
Hi Manjeet,

I am sorry that I misunderstood your question.

The hbase book http://hbase.apache.org/book.html#_upgrade_paths describes:
"You must stop your cluster, install the 1.x.x software, run the migration
described at
Executing the 0.96 Upgrade
<http://hbase.apache.org/book.html#executing.the.0.96.upgrade>
(substituting 1.x.x. wherever we make mention of 0.96.x in the section
below),
and then restart. Be sure to upgrade your ZooKeeper if it is a version less
than the required 3.4.x."

This is what I meant by "the environment ready for the upgrade".
Since the HBase 1.2.1 cluster is a brand-new cluster, that is a different
situation from yours.

If the contents of /data/ExportedFiles have been put into HDFS on the HBase
1.2.1 cluster, try the below command instead of yours:

sudo -u hdfs hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import test_table /data/ExportedFiles

Best Regards.



Re: hbase data migration from one cluster to another cluster on different versions

Posted by Manjeet Singh <ma...@gmail.com>.
Furthermore, to clarify why I used the scp command:

I copied the source cluster files to the destination cluster using the scp
command and put them into the destination cluster's HDFS. This is because of
the two different versions of Hadoop (the source cluster runs Hadoop 1.2.1
while the destination runs Hadoop 2.0). First I got the HDFS files onto the
local Linux filesystem, and then I used scp to push them to the destination
cluster.
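
As an aside, the usual workaround when plain distcp fails between Hadoop 1.x
and 2.x is to run distcp on the newer (destination) cluster and read from the
old cluster over its read-only HTTP interface (hftp); a rough sketch with
placeholder hosts (50070 is the default NameNode HTTP port):

hadoop distcp hftp://source-namenode:50070/data/ExportedFiles hdfs://dest-namenode:8020/data/ExportedFiles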

Thanks
Manjeet Singh




-- 
luv all

Re: hbase data migration from one cluster to another cluster on different versions

Posted by Manjeet Singh <ma...@gmail.com>.
Hi Yung,

First, thanks for the reply.
The link you provided is for upgrading the HBase version, whereas my problem
statement is different.
The problem is that I am trying to export HBase data from one cluster to
another cluster on the same network, but with different HBase versions, i.e.
0.94.27 on the source cluster and 1.2.1 on the destination cluster.
So this link should be the reference:
http://hbase.apache.org/0.94/book/ops_mgt.html#export


For the second point, which I forgot to mention in my mail: I did copy the
contents of /data/ExportedFiles to the destination cluster, which is running
HBase 1.2.1, but not with distcp; instead I used the scp command.
When I am trying to import the data I am getting the below error:

17/10/23 16:13:50 INFO mapreduce.Job: Task Id :
attempt_1505781444745_0070_m_000003_0, Status : FAILED
Error: java.io.IOException: keyvalues=NONE read 2 bytes, should read 121347
        at org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2306)
        at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:78)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
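
For context, this appears to be the symptom of reading a 0.94-format export
with a newer Import without telling it the source version; the
-Dhbase.import.version=0.94 flag mentioned elsewhere in this thread makes
Import parse the old KeyValue serialization, e.g.:

sudo -u hdfs hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import test_table /data/ExportedFiles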



Can you please elaborate more on "Is the environment ready for the upgrade?"

Thanks
Manjeet Singh



On Thu, Oct 26, 2017 at 8:32 AM, Yung-An He <ma...@gmail.com> wrote:

> Hi,
>
> Have you seen the reference guide
> <http://hbase.apache.org/book.html#_upgrade_paths> to make sure that the
> environment is ready for the upgrade?
> Perhaps you could try to copy the contents of /data/ExportedFiles to the
> HBase 1.2.1 cluster using distcp before import data instead of using
> "hdfs://<IP>:8020/data/ExportedFiles" directly.
> Then create the table on the HBase 1.2.1 cluster using HBase Shell. Column
> families must be identical to the table on the old one.
> Finally, import data from /data/ExportedFiles on the HBase 1.2.1 cluster.
>
>
> Best Regards.
>
> 2017-10-24 1:27 GMT+08:00 Manjeet Singh <ma...@gmail.com>:
>
> > Hi All,
> >
> > Can anyone help?
> >
> > adding few more investigations I have move all files to the destination
> > cluster hdfs and I have run below command:-
> >
> > sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Import test_table
> > hdfs://<IP>:8020/data/ExportedFiles
> >
> > I am getting below error
> >
> > 17/10/23 16:13:50 INFO mapreduce.Job: Task Id :
> > attempt_1505781444745_0070_m_000003_0, Status : FAILED
> > Error: java.io.IOException: keyvalues=NONE read 2 bytes, should read
> 121347
> >         at
> > org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.
> > java:2306)
> >         at
> > org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.
> > nextKeyValue(SequenceFileRecordReader.java:78)
> >         at
> > org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.
> > nextKeyValue(MapTask.java:556)
> >         at
> > org.apache.hadoop.mapreduce.task.MapContextImpl.
> > nextKeyValue(MapContextImpl.java:80)
> >         at
> > org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.
> > nextKeyValue(WrappedMapper.java:91)
> >         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
> >         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.
> java:787)
> >         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> >         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at
> > org.apache.hadoop.security.UserGroupInformation.doAs(
> > UserGroupInformation.java:1693)
> >         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> >
> >
> >
> >
> > can anyone suggest how to migrate data?
> >
> > Thanks
> > Manjeet Singh
> >
> >
> >
> >
> >
> > Hi All,
> >
> > I have query regarding hbase data migration from one cluster to another
> > cluster in same N/W, but with a different version of hbase one is 0.94.27
> > (source cluster hbase) and another is destination cluster hbase version
> is
> > 1.2.1.
> >
> > I have used below command to take backup of hbase table on source cluster
> > is:
> >  ./hbase org.apache.hadoop.hbase.mapreduce.Export SPDBRebuild
> > /data/backupData/
> >
> > below files were genrated by using above command:-
> >
> >
> > drwxr-xr-x 3 root root        4096 Dec  9  2016 _logs
> > -rw-r--r-- 1 root root   788227695 Dec 16  2016 part-m-00000
> > -rw-r--r-- 1 root root  1098757026 Dec 16  2016 part-m-00001
> > -rw-r--r-- 1 root root   906973626 Dec 16  2016 part-m-00002
> > -rw-r--r-- 1 root root  1981769314 Dec 16  2016 part-m-00003
> > -rw-r--r-- 1 root root  2099785782 Dec 16  2016 part-m-00004
> > -rw-r--r-- 1 root root  4118835540 Dec 16  2016 part-m-00005
> > -rw-r--r-- 1 root root 14217981341 Dec 16  2016 part-m-00006
> > -rw-r--r-- 1 root root           0 Dec 16  2016 _SUCCESS
> >
> >
> > in order to restore these files I am assuming I have to move these files
> in
> > destination cluster and have to run below command
> >
> > hbase org.apache.hadoop.hbase.mapreduce.Import <tablename>
> > /data/backupData/
> >
> > Please suggest whether I am heading in the correct direction, and second,
> > whether anyone has another option.
> > I have tried this with test data, but the above command took a very long
> > time and failed at the end.
> >
> > 17/10/23 11:54:21 INFO mapred.JobClient:  map 0% reduce 0%
> > 17/10/23 12:04:24 INFO mapred.JobClient: Task Id :
> > attempt_201710131340_0355_m_000002_0, Status : FAILED
> > Task attempt_201710131340_0355_m_000002_0 failed to report status for 600
> > seconds. Killing!
> >
> >
> > Thanks
> > Manjeet Singh
> >
> >
> >
> >
> >
> >
> > --
> > luv all
> >
>



-- 
luv all

Re: hbase data migration from one cluster to another cluster on different versions

Posted by Yung-An He <ma...@gmail.com>.
Hi,

Have you seen the reference guide
<http://hbase.apache.org/book.html#_upgrade_paths> to make sure that the
environment is ready for the upgrade?
Perhaps you could try to copy the contents of /data/ExportedFiles to the
HBase 1.2.1 cluster using distcp before importing the data instead of using
"hdfs://<IP>:8020/data/ExportedFiles" directly.
Then create the table on the HBase 1.2.1 cluster using HBase Shell. Column
families must be identical to those of the table on the old cluster.
Finally, import the data from /data/ExportedFiles on the HBase 1.2.1 cluster.
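
A rough sketch of what those three steps could look like. All values below are
placeholders, not commands taken from this thread: the NameNode hosts, the
hftp port 50070, the table name test_table, and the column family 'cf' need
to be replaced with the real ones.

 # 1. Copy the exported SequenceFiles to the destination cluster. Running
 #    distcp on the newer (destination) cluster and reading the source over
 #    hftp:// (or webhdfs://) is a common way to bridge two Hadoop versions.
 hadoop distcp hftp://<old-namenode>:50070/data/ExportedFiles \
     hdfs://<new-namenode>:8020/data/ExportedFiles

 # 2. Recreate the table on the 1.2.1 cluster with the same column families.
 echo "create 'test_table', 'cf'" | hbase shell

 # 3. Import from the copied directory.
 hbase org.apache.hadoop.hbase.mapreduce.Import test_table /data/ExportedFiles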


Best Regards.

2017-10-24 1:27 GMT+08:00 Manjeet Singh <ma...@gmail.com>:

> Hi All,
>
> Can anyone help?
>
> Adding a few more details from my investigation: I have moved all the files
> to the destination cluster HDFS and have run the below command:
>
> sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Import test_table
> hdfs://<IP>:8020/data/ExportedFiles
>
> I am getting the below error:
>
> 17/10/23 16:13:50 INFO mapreduce.Job: Task Id :
> attempt_1505781444745_0070_m_000003_0, Status : FAILED
> Error: java.io.IOException: keyvalues=NONE read 2 bytes, should read 121347
>         at
> org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.
> java:2306)
>         at
> org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.
> nextKeyValue(SequenceFileRecordReader.java:78)
>         at
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.
> nextKeyValue(MapTask.java:556)
>         at
> org.apache.hadoop.mapreduce.task.MapContextImpl.
> nextKeyValue(MapContextImpl.java:80)
>         at
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.
> nextKeyValue(WrappedMapper.java:91)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1693)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
>
>
>
>
> Can anyone suggest how to migrate the data?
>
> Thanks
> Manjeet Singh
>
>
>
>
>
> Hi All,
>
> I have a query regarding HBase data migration from one cluster to another
> cluster on the same network, but with different HBase versions: the source
> cluster is on 0.94.27 and the destination cluster is on 1.2.1.
>
> I used the below command to take a backup of the HBase table on the source
> cluster:
>  ./hbase org.apache.hadoop.hbase.mapreduce.Export SPDBRebuild
> /data/backupData/
>
> The below files were generated by the above command:
>
>
> drwxr-xr-x 3 root root        4096 Dec  9  2016 _logs
> -rw-r--r-- 1 root root   788227695 Dec 16  2016 part-m-00000
> -rw-r--r-- 1 root root  1098757026 Dec 16  2016 part-m-00001
> -rw-r--r-- 1 root root   906973626 Dec 16  2016 part-m-00002
> -rw-r--r-- 1 root root  1981769314 Dec 16  2016 part-m-00003
> -rw-r--r-- 1 root root  2099785782 Dec 16  2016 part-m-00004
> -rw-r--r-- 1 root root  4118835540 Dec 16  2016 part-m-00005
> -rw-r--r-- 1 root root 14217981341 Dec 16  2016 part-m-00006
> -rw-r--r-- 1 root root           0 Dec 16  2016 _SUCCESS
>
>
> In order to restore these files, I am assuming I have to move them to the
> destination cluster and run the below command:
>
> hbase org.apache.hadoop.hbase.mapreduce.Import <tablename>
> /data/backupData/
>
> Please suggest whether I am heading in the correct direction, and second,
> whether anyone has another option.
> I have tried this with test data, but the above command took a very long
> time and failed at the end.
>
> 17/10/23 11:54:21 INFO mapred.JobClient:  map 0% reduce 0%
> 17/10/23 12:04:24 INFO mapred.JobClient: Task Id :
> attempt_201710131340_0355_m_000002_0, Status : FAILED
> Task attempt_201710131340_0355_m_000002_0 failed to report status for 600
> seconds. Killing!
>
>
> Thanks
> Manjeet Singh
>
>
>
>
>
>
> --
> luv all
>

Fwd: hbase data migration from one cluster to another cluster on different versions

Posted by Manjeet Singh <ma...@gmail.com>.
Hi All,

Can anyone help?

Adding a few more details from my investigation: I have moved all the files
to the destination cluster HDFS and have run the below command:

sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Import test_table
hdfs://<IP>:8020/data/ExportedFiles

I am getting the below error:

17/10/23 16:13:50 INFO mapreduce.Job: Task Id :
attempt_1505781444745_0070_m_000003_0, Status : FAILED
Error: java.io.IOException: keyvalues=NONE read 2 bytes, should read 121347
        at
org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2306)
        at
org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:78)
        at
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
        at
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)




Can anyone suggest how to migrate the data?

Thanks
Manjeet Singh





Hi All,

I have a query regarding HBase data migration from one cluster to another
cluster on the same network, but with different HBase versions: the source
cluster is on 0.94.27 and the destination cluster is on 1.2.1.

I used the below command to take a backup of the HBase table on the source
cluster:
 ./hbase org.apache.hadoop.hbase.mapreduce.Export SPDBRebuild
/data/backupData/
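
For reference, the Export tool also takes optional trailing arguments to
control what gets written, roughly "Export <tablename> <outputdir>
[<versions> [<starttime> [<endtime>]]]". A sketch with placeholder values,
exporting every cell version written between two epoch-millisecond timestamps:

 # 2147483647 (max int) keeps all versions; the two timestamps are placeholders
 ./hbase org.apache.hadoop.hbase.mapreduce.Export SPDBRebuild /data/backupData/ \
     2147483647 1480204800000 1481846400000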

The below files were generated by the above command:


drwxr-xr-x 3 root root        4096 Dec  9  2016 _logs
-rw-r--r-- 1 root root   788227695 Dec 16  2016 part-m-00000
-rw-r--r-- 1 root root  1098757026 Dec 16  2016 part-m-00001
-rw-r--r-- 1 root root   906973626 Dec 16  2016 part-m-00002
-rw-r--r-- 1 root root  1981769314 Dec 16  2016 part-m-00003
-rw-r--r-- 1 root root  2099785782 Dec 16  2016 part-m-00004
-rw-r--r-- 1 root root  4118835540 Dec 16  2016 part-m-00005
-rw-r--r-- 1 root root 14217981341 Dec 16  2016 part-m-00006
-rw-r--r-- 1 root root           0 Dec 16  2016 _SUCCESS


In order to restore these files, I am assuming I have to move them to the
destination cluster and run the below command:

hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> /data/backupData/
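
If the Import completes, one way to sanity-check the restore is to compare row
counts on the two clusters with the bundled RowCounter job; a sketch, assuming
the table keeps the name SPDBRebuild on both sides:

 # on the source (0.94) cluster
 ./hbase org.apache.hadoop.hbase.mapreduce.RowCounter SPDBRebuild

 # on the destination (1.2.1) cluster, after the Import finishes
 hbase org.apache.hadoop.hbase.mapreduce.RowCounter SPDBRebuild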

Please suggest whether I am heading in the correct direction, and second,
whether anyone has another option.
I have tried this with test data, but the above command took a very long
time and failed at the end.

17/10/23 11:54:21 INFO mapred.JobClient:  map 0% reduce 0%
17/10/23 12:04:24 INFO mapred.JobClient: Task Id :
attempt_201710131340_0355_m_000002_0, Status : FAILED
Task attempt_201710131340_0355_m_000002_0 failed to report status for 600
seconds. Killing!


Thanks
Manjeet Singh






-- 
luv all