Posted to issues@hbase.apache.org by "Jean-Marc Spaggiari (JIRA)" <ji...@apache.org> on 2017/04/22 00:21:04 UTC
[jira] [Commented] (HBASE-15320) HBase connector for Kafka Connect
[ https://issues.apache.org/jira/browse/HBASE-15320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15979638#comment-15979638 ]
Jean-Marc Spaggiari commented on HBASE-15320:
---------------------------------------------
@stack can you please get this JIRA assigned to Mike?
Thanks.
> HBase connector for Kafka Connect
> ---------------------------------
>
> Key: HBASE-15320
> URL: https://issues.apache.org/jira/browse/HBASE-15320
> Project: HBase
> Issue Type: New Feature
> Components: Replication
> Reporter: Andrew Purtell
> Labels: beginner
> Fix For: 2.0.0
>
>
> Implement an HBase connector with source and sink tasks for the Connect framework (http://docs.confluent.io/2.0.0/connect/index.html) available in Kafka 0.9 and later.
> See also: http://www.confluent.io/blog/announcing-kafka-connect-building-large-scale-low-latency-data-pipelines
> An HBase source (http://docs.confluent.io/2.0.0/connect/devguide.html#task-example-source-task) could be implemented as a replication endpoint or WALObserver, publishing cluster wide change streams from the WAL to one or more topics, with configurable mapping and partitioning of table changes to topics.
> An HBase sink task (http://docs.confluent.io/2.0.0/connect/devguide.html#sink-tasks) would persist Kafka SinkRecords into HBase tables, with optional transformation (JSON? Avro? map fields to a native schema?).
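The "configurable mapping and partitioning of table changes to topics" on the source side could be sketched roughly as below. This is only an illustration of the idea, not code from any patch on this issue; the class and field names (WalTopicRouter, tableToTopic, topicPrefix) are invented for the sketch. Hashing the row key to pick the partition keeps all edits for one row in one Kafka partition, preserving per-row ordering.

```java
import java.util.Arrays;
import java.util.Map;

// Hypothetical sketch: route a WAL edit for a given table and row key
// to a Kafka topic and partition. Tables with an explicit mapping go to
// that topic; others fall back to a configured prefix + table name.
public class WalTopicRouter {
    private final Map<String, String> tableToTopic; // explicit overrides
    private final String topicPrefix;               // fallback: prefix + table
    private final int numPartitions;

    public WalTopicRouter(Map<String, String> tableToTopic,
                          String topicPrefix, int numPartitions) {
        this.tableToTopic = tableToTopic;
        this.topicPrefix = topicPrefix;
        this.numPartitions = numPartitions;
    }

    /** Resolve the destination topic for edits on the given table. */
    public String topicFor(String table) {
        return tableToTopic.getOrDefault(table, topicPrefix + table);
    }

    /** All edits for one row hash to one partition, so row order survives. */
    public int partitionFor(byte[] rowKey) {
        return (Arrays.hashCode(rowKey) & 0x7fffffff) % numPartitions;
    }
}
```

A real replication-endpoint or WALObserver implementation would additionally batch edits and track WAL positions for delivery guarantees, which this sketch omits.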
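On the sink side, the "map fields to native schema" option could amount to flattening a record's fields into cells under one column family, keyed by a designated row-key field. The sketch below models that mapping with plain JDK types only (a Map standing in for both the SinkRecord and the resulting HBase Put); SinkRecordMapper, rowKeyField, and columnFamily are invented names, not part of Kafka Connect or the HBase client.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: turn one flat record into the cells a sink task
// would write, as "family:qualifier" -> value pairs. The row-key field
// becomes the HBase row and is excluded from the cells themselves.
public class SinkRecordMapper {
    private final String rowKeyField;
    private final String columnFamily;

    public SinkRecordMapper(String rowKeyField, String columnFamily) {
        this.rowKeyField = rowKeyField;
        this.columnFamily = columnFamily;
    }

    /** Map every non-key field to a cell in the configured column family. */
    public Map<String, String> toCells(Map<String, String> record) {
        if (record.get(rowKeyField) == null) {
            throw new IllegalArgumentException(
                "record has no row key field: " + rowKeyField);
        }
        Map<String, String> cells = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : record.entrySet()) {
            if (!e.getKey().equals(rowKeyField)) {
                cells.put(columnFamily + ":" + e.getKey(), e.getValue());
            }
        }
        return cells;
    }
}
```

In an actual SinkTask, the JSON/Avro question above would be settled by the configured Connect converter before the record reaches this mapping step.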
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)