Posted to dev@phoenix.apache.org by "Viraj Jasani (Jira)" <ji...@apache.org> on 2023/02/10 00:50:00 UTC
[jira] [Assigned] (PHOENIX-5521) Phoenix-level HBase Replication sink (Endpoint coproc)
[ https://issues.apache.org/jira/browse/PHOENIX-5521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Viraj Jasani reassigned PHOENIX-5521:
-------------------------------------
Assignee: Viraj Jasani
> Phoenix-level HBase Replication sink (Endpoint coproc)
> ------------------------------------------------------
>
> Key: PHOENIX-5521
> URL: https://issues.apache.org/jira/browse/PHOENIX-5521
> Project: Phoenix
> Issue Type: Sub-task
> Reporter: Geoffrey Jacoby
> Assignee: Viraj Jasani
> Priority: Major
>
> An HBase coprocessor Endpoint hook that takes in a request from a remote cluster (containing both the WALEdit's data and the WALKey's annotated metadata telling the remote cluster what tenant_id, logical table name, and timestamp the data is associated with).
> Ideally the API's message format should be configurable / pluggable, and could be either a protobuf or an Avro schema similar to the WALEdit-like one described by PHOENIX-5443. Endpoints in HBase are structured to work with protobufs, so some conversion may be necessary in an Avro-compatible version. Future work may also extend this to any conforming schema given by a schema service such as the one in PHOENIX-5443, which would be useful in allowing PHOENIX-5442's CDC service to be used as a backup / migration tool.
> The endpoint hook would take the metadata + data and regenerate a complete set of Phoenix mutations, both data and indexes, just as the Phoenix client did for the original SQL statement that generated the source-side edits. These mutations would be written to the remote cluster by the normal Phoenix write path.
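As a rough illustration of the kind of request message such an Endpoint might accept, a protobuf sketch is below. All message and field names here are hypothetical (not from Phoenix or HBase); HBase Endpoints are built around proto2 services, hence the syntax choice. An Avro-compatible variant would carry the same fields in an Avro record, with the conversion step the description mentions.

```proto
// Hypothetical sketch, not a Phoenix/HBase API: one replicated entry,
// carrying the WALKey annotations (tenant, logical table, timestamp)
// alongside the serialized WALEdit cell data.
syntax = "proto2";

message ReplicationSinkEntry {
  // Metadata annotated onto the WALKey on the source cluster.
  optional bytes tenant_id = 1;
  optional string logical_table_name = 2;
  optional uint64 timestamp = 3;
  // Serialized WALEdit cells for this entry.
  repeated bytes cells = 4;
}

message ReplicateLogEntriesRequest {
  repeated ReplicationSinkEntry entries = 1;
}
```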
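To make the regeneration step concrete, here is a minimal, self-contained Java sketch of the transformation shape: annotated metadata plus column data in, tenant-scoped mutations out. Every name in it (ReplicationSinkSketch, regenerate, the key layout) is illustrative and assumed, not a Phoenix API; a real endpoint would drive Phoenix's own client write path so that index mutations are generated and maintained correctly.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: class, method, and key-format choices here are
// hypothetical, not Phoenix APIs. A real sink would hand the data to
// Phoenix's client write path rather than build keys by hand.
public class ReplicationSinkSketch {

    /**
     * Rebuild tenant-scoped mutations from replicated metadata + data.
     * Real Phoenix row-key encoding and index maintenance are far richer;
     * this only shows the shape of the metadata-driven transformation.
     */
    static Map<String, String> regenerate(String tenantId,
                                          String logicalTable,
                                          long timestamp,
                                          Map<String, String> columns) {
        Map<String, String> mutations = new LinkedHashMap<>();
        for (Map.Entry<String, String> col : columns.entrySet()) {
            // Tenant id leads the key, mirroring Phoenix's multi-tenant
            // convention of tenant-prefixed row keys.
            String dataKey = tenantId + "." + logicalTable + "@" + timestamp
                    + ":" + col.getKey();
            mutations.put(dataKey, col.getValue());
        }
        return mutations;
    }

    public static void main(String[] args) {
        Map<String, String> cols = new LinkedHashMap<>();
        cols.put("NAME", "alice");
        System.out.println(regenerate("t1", "ORDERS", 42L, cols));
    }
}
```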
--
This message was sent by Atlassian Jira
(v8.20.10#820010)