Posted to user@phoenix.apache.org by Krishnasamy Rajan <yu...@gmail.com> on 2015/12/22 03:34:21 UTC

Backup and Recovery for disaster recovery

Hi, 

We’re using HBase under Phoenix and need to set up a DR site with ongoing replication.
Our Phoenix tables are salted. In this scenario, what is the best method to copy data to the remote cluster?
People give different opinions. Standard replication will not work for us, since we’re using bulk loading.

Can you advise what our options are for copying data to the remote cluster and keeping it up to date?
Thanks for your inputs.

-Regards
Krishna 

Re: Backup and Recovery for disaster recovery

Posted by Ankit Singhal <an...@gmail.com>.
+1 for taking snapshots and exporting them to the DR cluster if there is no
requirement for the DR cluster to stay up to date in real time.
I am not sure if there is any incremental snapshot feature out yet, but
taking snapshots on a periodic basis is also not that heavy.
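
Roughly, something like this minimal, untested sketch (the snapshot name,
table name, mapper count, and DR NameNode URI are all placeholders you'd
replace). Since ExportSnapshot ships the HFiles byte for byte, the salt
bytes in your Phoenix row keys come across unchanged:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
    import org.apache.hadoop.util.ToolRunner;

    public class SnapshotAndExport {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          // Point-in-time snapshot; cheap, no data is copied yet.
          admin.snapshot("my_table_snap_20151224",
              TableName.valueOf("MY_SALTED_TABLE"));
        }
        // Ship the snapshot's HFiles to the DR cluster as a MapReduce job.
        int rc = ToolRunner.run(conf, new ExportSnapshot(), new String[] {
            "-snapshot", "my_table_snap_20151224",
            "-copy-to", "hdfs://dr-namenode:8020/hbase",
            "-mappers", "16"
        });
        System.exit(rc);
      }
    }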

Re: Backup and Recovery for disaster recovery

Posted by Sandeep Nemuri <nh...@gmail.com>.
You can take incremental HBase snapshots for the required tables and store
them in the DR cluster.
Restoring doesn't take much time in this case.
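
For the restore side on the DR cluster, a minimal sketch (snapshot and table
names are placeholders; this assumes the snapshot was already exported into
the DR cluster's HBase root). The restore is quick because it mostly relinks
the existing HFiles rather than rewriting data:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class RestoreOnDr {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // DR cluster config
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("MY_SALTED_TABLE");
          if (admin.tableExists(table)) {
            // restore_snapshot requires the table to be disabled first.
            admin.disableTable(table);
            admin.restoreSnapshot("my_table_snap_20151224");
            admin.enableTable(table);
          } else {
            // First time: materialize the exported snapshot as a new table.
            admin.cloneSnapshot("my_table_snap_20151224", table);
          }
        }
      }
    }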

Thanks
Sandeep Nemuri

RE: Backup and Recovery for disaster recovery

Posted by "Vasudevan, Ramkrishna S" <ra...@intel.com>.
I am not very sure if Phoenix directly has any replication support now. But
in your case, since you are bulk loading the tables, you cannot use standard
WAL replication; that problem is addressed on the HBase side as part of
https://issues.apache.org/jira/browse/HBASE-13153
where bulk-loaded files can get replicated directly to the remote cluster,
just as WAL edits get replicated.
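
If my memory serves, enabling that on a release which includes the fix comes
down to a couple of hbase-site.xml properties on the source cluster's region
servers, along the lines of the excerpt below (the cluster id value is a
placeholder; please double-check the property names and the minimum HBase
version against the HBASE-13153 documentation):

    <property>
      <name>hbase.replication.bulkload.enabled</name>
      <value>true</value>
    </property>
    <property>
      <!-- any id that is unique across your replicating clusters -->
      <name>hbase.replication.cluster.id</name>
      <value>source-cluster-1</value>
    </property>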

Regards
Ram

Re: Backup and Recovery for disaster recovery

Posted by Krishna <re...@gmail.com>.
You have two options (rough sketch of the first one below):
- save the HFiles created during bulk load & import them on the DR cluster
- after bulk-loading, export the table on the live cluster & import it to
the DR cluster
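
For the first option, something along these lines run against the DR
cluster, assuming the HFiles have already been copied over (e.g. with
distcp); the staging path and table name are placeholders, and the target
table must already exist. For the second option, the stock
org.apache.hadoop.hbase.mapreduce.Export and Import MapReduce jobs do the
table-level copy.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
    import org.apache.hadoop.util.ToolRunner;

    public class ReloadHFilesOnDr {
      public static void main(String[] args) throws Exception {
        // Same completebulkload step you run on the live cluster, just
        // pointed at the DR cluster and the copied HFile directory.
        Configuration conf = HBaseConfiguration.create(); // DR cluster config
        int rc = ToolRunner.run(conf, new LoadIncrementalHFiles(conf),
            new String[] {
                "/staging/hfiles/MY_SALTED_TABLE", // HFileOutputFormat output
                "MY_SALTED_TABLE"                  // target table
            });
        System.exit(rc);
      }
    }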
