Posted to commits@hbase.apache.org by st...@apache.org on 2018/10/31 22:25:28 UTC

[hbase-connectors] branch master updated: Edit on the READMEs to accommodate new kafka proxy.

This is an automated email from the ASF dual-hosted git repository.

stack pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hbase-connectors.git


The following commit(s) were added to refs/heads/master by this push:
     new 1dada61  Edit on the READMEs to accommodate new kafka proxy.
1dada61 is described below

commit 1dada6181fa665f357b814d3aef18f26f3f889e5
Author: Michael Stack <st...@apache.org>
AuthorDate: Wed Oct 31 15:24:57 2018 -0700

    Edit on the READMEs to accommodate new kafka proxy.
---
 README.md       |   2 +
 kafka/README.md | 132 +++++++++++++++++++++++++++++---------------------------
 2 files changed, 71 insertions(+), 63 deletions(-)

diff --git a/README.md b/README.md
index 5e39d6c..85dd8b6 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,5 @@
 # hbase-connectors
 
 Connectors for [Apache HBase&trade;](https://hbase.apache.org) 
+
+  * [Kafka Proxy](./kafka)
diff --git a/kafka/README.md b/kafka/README.md
index 7685095..b35573c 100755
--- a/kafka/README.md
+++ b/kafka/README.md
@@ -1,100 +1,104 @@
-Hbase Kafka Proxy
+# Apache HBase&trade; Kafka Proxy
 
-Welcome to the hbase kafka proxy.  The purpose of this proxy is to act as a 'fake peer'.  It
-receives replication events from it's peer and applies a set of rules (stored in
-kafka-route-rules.xml) to determine if the event should be forwared to a topic.  If the
-mutation matches one of the rules, the mutation is converted to an avro object and the
-item is placed into the topic.
+Welcome to the HBase Kafka Proxy. The purpose of this proxy is to act as a _fake peer_.
+It receives replication events from a peer cluster and applies a set of rules (stored in
+a _kafka-route-rules.xml_ file) to determine if the event should be forwarded to a
+kafka topic. If the mutation matches a rule, the mutation is converted to an Avro object
+and the item is placed into the topic.
 
-The service sets up a bare bones region server, so it will use the values in hbase-site.xml.  If
-you wish to override those values, pass them as -Dkey=value.
+The service sets up a bare bones RegionServer, so it will use the values in any
+_hbase-site.xml_ it finds on the CLASSPATH. If you wish to override those values,
+pass them as properties on the command line, e.g. `-Dkey=value`.
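+
+For example, to point the proxy at a particular zookeeper ensemble (the start command
+itself is described under _Starting the Service_ below):
+
+```
+bin/hbase-connectors-daemon.sh start kafkaproxy -Dhbase.zookeeper.quorum=localhost:1234 ... other args ...
+```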
 
-To Use:
+## Usage
 
-1. Make sure the hbase command is in your path.  The proxy uses the 'hbase classpath' command to
-find the hbase libraries.
-
-2. Create any topics in your kafka broker that you wish to use.
-
-3. set up kafka-route-rules.xml.  This file controls how the mutations are routed.  There are
-two kinds of rules: route and drop.  drop: any mutation that matches this rule will be dropped.
-route: any mutation that matches this rule will be routed to the configured topic.
+  1. Make sure the hbase command is in your path. The proxy runs `hbase classpath` to find the hbase libraries.
+  2. Create any topics in your kafka broker that you wish to use.
+  3. Set up _kafka-route-rules.xml_. This file controls how mutations are routed. There are two kinds of rules: _route_ and _drop_.
+     * _drop_: any mutation that matches this rule will be dropped.
+     * _route_: any mutation that matches this rule will be routed to the configured topic.
 
 Each rule has the following parameters:
-- table
-- columnFamily
-- qualifier
+
+  * table
+  * columnFamily
+  * qualifier
 
 The qualifier parameter can contain simple wildcard expressions (start and end only).
 
-Examples
+### Examples
 
+```
 <rules>
 	<rule action="route" table="default:mytable" topic="foo" />
 </rules>
+```
 
-
-This causes all mutations done to default:mytable to be routed to kafka topic 'foo'
+This causes all mutations to `default:mytable` to be routed to the kafka topic `foo`.
 
 
+```
 <rules>
 	<rule action="route" table="default:mytable" columnFamily="mycf" qualifier="myqualifier"
 	topic="mykafkatopic"/>
 </rules>
+```
 
-This will cause all mutations from default:mytable columnFamily mycf and qualifier myqualifier
-to be routed to mykafkatopic.
+This will cause all mutations to `default:mytable` columnFamily `mycf` and qualifier `myqualifier`
+to be routed to `mykafkatopic`.
 
 
+```
 <rules>
 	<rule action="drop" table="default:mytable" columnFamily="mycf" qualifier="secret*"/>
 	<rule action="route" table="default:mytable" columnFamily="mycf" topic="mykafkatopic"/>
 </rules>
+```
 
-This combination will route all mutations from default:mytable columnFamily mycf to mykafkatopic
-unless they start with 'secret'.  Items matching that rule will be dropped.  The way the rule is
-written, all other mutations for column family mycf will be routed to the 'mykafka' topic.
+This combination will route all mutations to `default:mytable` columnFamily `mycf` to
+`mykafkatopic`, unless the qualifier starts with `secret`; items matching that rule will
+be dropped. As written, all other mutations for column family `mycf` will be routed to
+the `mykafkatopic` topic.
 
-4. Service arguments
+## Service Arguments
 
---kafkabrokers (or -b) <kafka brokers (comma delmited)>
---routerulesfile (or -r) <file with rules to route to kafka (defaults to kafka-route-rules.xml)>
+```
+--kafkabrokers    (or -b) <kafka brokers (comma delimited)>
+--routerulesfile  (or -r) <file with rules to route to kafka (defaults to kafka-route-rules.xml)>
 --kafkaproperties (or -f) <Path to properties file that has the kafka connection properties>
---peername (or -p) name of hbase peer to use (defaults to hbasekafka)
---znode (or -z) root znode (defaults to /kafkaproxy)
---enablepeer (or -e) enable peer on startup (defaults to false)]
---auto (or -a) auto create peer
+--peername        (or -p) name of hbase peer to use (defaults to hbasekafka)
+--znode           (or -z) root znode (defaults to /kafkaproxy)
+--enablepeer      (or -e) enable peer on startup (defaults to false)
+--auto            (or -a) auto create peer
+```
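+
+For `--kafkaproperties`, a minimal sketch of such a file, assuming it holds standard
+Kafka connection properties (`bootstrap.servers` is the usual minimum; the exact keys
+honored are determined by the proxy):
+
+```
+bootstrap.servers=localhost:9092
+client.id=hbase-kafka-proxy
+```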
 
 
-5. start the service.
-   - make sure the hbase command is in your path
-   - ny default, the service looks for route-rules.xml in the conf directory, you can specify a
-     differeent file or location with the -r argument
+## Starting the Service
+  * make sure the hbase command is in your path
+  * by default, the service looks for _kafka-route-rules.xml_ in the conf directory. You can specify a different file or location with the `-r` argument
 
-bin/hbase-connectors-daemon.sh start kafkaproxy -a -e -p wootman -b localhost:9092 -r ~/kafka-route-rules.xml
-
-this:
-- starts the kafka proxy
-- passes the -a.  The proxy will create the replication peer specified by -p if it does not exist
-  (not required, but it savecs some busy work).
-- enables the peer (-e) the proxy will enable the peer when the service starts (not required, you can
-  manually enable the peer in the hbase shell)
 
+### Example
+```
+bin/hbase-connectors-daemon.sh start kafkaproxy -a -e -p wootman -b localhost:9092 -r ~/kafka-route-rules.xml
+```
 
-Notes:
-1. The proxy will connect to the zookeeper in hbase-site.xml by default.  You can override this by
-   passing -Dhbase.zookeeper.quorum
+This:
+ * starts the kafka proxy
+ * passes `-a` so the proxy will create the replication peer specified by `-p` if it does not exist (not required, but it saves some busy work)
+ * enables the peer (`-e`) when the service starts (not required; you can manually enable the peer in the hbase shell)
 
- bin/hbase-connectors-daemon.sh start kafkaproxy -Dhbase.zookeeper.quorum=localhost:1234 ..... other args ....
+## Notes
 
-2. route rules only support unicode characters.
-3. I do not have access to a secured hadoop clsuter to test this on.
+  1. The proxy will connect to the zookeeper quorum specified in `hbase-site.xml` by default. You can override this by passing `-Dhbase.zookeeper.quorum` (see the example above).
+  2. Route rules only support unicode characters.
+  3. I do not have access to a secured hadoop cluster to test this on.
 
-Message format
+## Message Format
 
 Messages are in Avro format; this is the schema:
 
-{"namespace": "org.apache.hadoop.hbase.kafka",
+```{"namespace": "org.apache.hadoop.hbase.kafka",
  "type": "record",
  "name": "HBaseKafkaEvent",
  "fields": [
@@ -106,21 +110,23 @@ Messages are in avro format, this is the schema:
     {"name": "family", "type": "bytes"},
     {"name": "table", "type": "bytes"}
  ]
-}
+}
+```
 
 Any language that supports Avro should be able to consume the messages off the topic.
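+
+As a minimal sketch (not part of the project), a Java consumer might decode the events
+as follows. It assumes the kafka-clients and avro jars are on the classpath, that the
+full schema above is saved as `HBaseKafkaEvent.avsc`, that the proxy routes to a topic
+named `foo`, and that each message value is the raw binary Avro encoding of one event:
+
+```
+import java.io.File;
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+import java.time.Duration;
+import java.util.Collections;
+import java.util.Properties;
+
+import org.apache.avro.Schema;
+import org.apache.avro.generic.GenericDatumReader;
+import org.apache.avro.generic.GenericRecord;
+import org.apache.avro.io.DecoderFactory;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+
+public class HBaseKafkaEventReader {
+  public static void main(String[] args) throws Exception {
+    // Parse the HBaseKafkaEvent schema (saved from the README above).
+    Schema schema = new Schema.Parser().parse(new File("HBaseKafkaEvent.avsc"));
+    GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(schema);
+
+    Properties props = new Properties();
+    props.put("bootstrap.servers", "localhost:9092");
+    props.put("group.id", "hbase-kafka-example");
+    props.put("key.deserializer",
+        "org.apache.kafka.common.serialization.ByteArrayDeserializer");
+    props.put("value.deserializer",
+        "org.apache.kafka.common.serialization.ByteArrayDeserializer");
+
+    try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
+      consumer.subscribe(Collections.singletonList("foo"));
+      while (true) {
+        ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
+        for (ConsumerRecord<byte[], byte[]> record : records) {
+          // Each message value is one binary-encoded HBaseKafkaEvent;
+          // Avro "bytes" fields deserialize as ByteBuffers.
+          GenericRecord event = reader.read(null,
+              DecoderFactory.get().binaryDecoder(record.value(), null));
+          System.out.println("table="
+              + StandardCharsets.UTF_8.decode((ByteBuffer) event.get("table")));
+        }
+      }
+    }
+  }
+}
+```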
 
 
-Testing Utility
+## Testing Utility
 
 A utility is included to test the routing rules.
 
+```
 bin/hbase-connectors-daemon.sh start kafkaproxytest -k <kafka.broker> -t <topic to listen to>
+```
 
-the messages will be dumped in string format under logs/
+The messages will be dumped in string format under `logs/`.
 
-TODO:
-1. Some properties passed into the region server are hard-coded.
-2. The avro objects should be generic
-3. Allow rules to be refreshed without a restart
-4. Get this tested on a secure (TLS & Kerberos) enabled cluster.
\ No newline at end of file
+## TODO
+  1. Some properties passed into the region server are hard-coded.
+  2. The Avro objects should be generic.
+  3. Allow rules to be refreshed without a restart.
+  4. Get this tested on a secure (TLS & Kerberos) enabled cluster.