Posted to issues@phoenix.apache.org by "Lars Hofhansl (Jira)" <ji...@apache.org> on 2020/08/11 21:14:00 UTC
[jira] [Commented] (PHOENIX-4286) Create EXPORT SCHEMA command
[ https://issues.apache.org/jira/browse/PHOENIX-4286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17175842#comment-17175842 ]
Lars Hofhansl commented on PHOENIX-4286:
----------------------------------------
What would the syntax look like?
Other databases have {{SHOW CREATE TABLE}}, {{SHOW CREATE VIEW}}, etc.
I assume {{EXPORT}} exports everything.
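For comparison, a minimal sketch of the existing precedent in other databases, plus what a Phoenix equivalent might look like. The Phoenix statements below are purely illustrative proposals, not implemented syntax:

```sql
-- MySQL precedent: emit the DDL that recreates a single object
SHOW CREATE TABLE my_table;

-- Hypothetical Phoenix equivalents (illustrative only, not implemented):
-- per-object DDL export
SHOW CREATE TABLE my_schema.my_table;
SHOW CREATE VIEW my_schema.my_view;
-- bulk export of all DDL under a schema (the EXPORT case discussed above)
EXPORT SCHEMA my_schema;
```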
> Create EXPORT SCHEMA command
> ----------------------------
>
> Key: PHOENIX-4286
> URL: https://issues.apache.org/jira/browse/PHOENIX-4286
> Project: Phoenix
> Issue Type: New Feature
> Reporter: Geoffrey Jacoby
> Assignee: Swaroopa Kadam
> Priority: Major
>
> Phoenix takes in DDL statements and uses them to create metadata in the various SYSTEM tables. There's currently no supported way to go in the opposite direction.
> This is particularly important in migration use cases. If schemas between two clusters are already synchronized, migration of data is _relatively_ straightforward using either Phoenix or HBase's MapReduce integration. Syncing metadata can be much more complicated, particularly if only a subset needs to be migrated. For example, an operator migrating a single tenant from one cluster to another would also want to migrate any views or sequences owned by that tenant.
> This can be accomplished by treating SYSTEM tables as data tables and migrating subsets of them, but such implementations rely on brittle low-level implementation details that can and do change.
> Given an EXPORT command, this could be done at a much higher level -- you simply select the DDL statements from the source cluster you need, and then run them on the target cluster.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)