Posted to commits@storm.apache.org by ka...@apache.org on 2017/04/14 06:22:15 UTC

[1/2] storm git commit: Fix headers

Repository: storm
Updated Branches:
  refs/heads/1.x-branch 04dc84608 -> c05ba1dfd


Fix headers

Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/ff69ba6f
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/ff69ba6f
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/ff69ba6f

Branch: refs/heads/1.x-branch
Commit: ff69ba6f7ed00ed60759796c43f2683de135106b
Parents: 04dc846
Author: AZ <an...@izinenko.ru>
Authored: Fri Mar 24 12:56:08 2017 +0300
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Fri Apr 14 15:22:04 2017 +0900

----------------------------------------------------------------------
 external/storm-kafka/README.md | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/ff69ba6f/external/storm-kafka/README.md
----------------------------------------------------------------------
diff --git a/external/storm-kafka/README.md b/external/storm-kafka/README.md
index 5e86cce..3ba9b0c 100644
--- a/external/storm-kafka/README.md
+++ b/external/storm-kafka/README.md
@@ -3,15 +3,15 @@ Storm Kafka
 
 Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.
 
-##Spouts
+## Spouts
 We support both Trident and core Storm spouts. For both spout implementations, we use a BrokerHosts interface that
 tracks the Kafka broker host to partition mapping, and a KafkaConfig that controls some Kafka-related parameters.
 
-###BrokerHosts
+### BrokerHosts
 In order to initialize your Kafka spout/emitter, you need to construct an instance of the marker interface BrokerHosts.
 Currently, we support the following two implementations:
 
-####ZkHosts
+#### ZkHosts
 ZkHosts is what you should use if you want to dynamically track the Kafka broker to partition mapping. This class uses
 Kafka's ZooKeeper entries to track the brokerHost -> partition mapping. You can instantiate an object by calling
 ```java
@@ -24,7 +24,7 @@ partition information is stored. By default this is /brokers which is what the d
 By default, the broker-partition mapping is refreshed every 60 seconds from ZooKeeper. If you want to change it, you
 should set host.refreshFreqSecs to your chosen value.
 
-####StaticHosts
+#### StaticHosts
 This is an alternative implementation where broker -> partition information is static. In order to construct an instance
 of this class, you need to first construct an instance of GlobalPartitionInformation.
 
@@ -39,7 +39,7 @@ of this class, you need to first construct an instance of GlobalPartitionInforma
     StaticHosts hosts = new StaticHosts(partitionInfo);
 ```
 
-###KafkaConfig
+### KafkaConfig
 The second thing needed for constructing a KafkaSpout is an instance of KafkaConfig.
 ```java
     public KafkaConfig(BrokerHosts hosts, String topic)
@@ -98,7 +98,7 @@ The KafkaConfig class also has bunch of public variables that controls your appl
 ```
 
 Most of them are self-explanatory, except MultiScheme.
-###MultiScheme
+### MultiScheme
 MultiScheme is an interface that dictates how the ByteBuffer consumed from Kafka gets transformed into a Storm tuple. It
 also controls the naming of your output fields.
 
@@ -119,7 +119,7 @@ public Iterable<List<Object>> deserializeMessageWithMetadata(ByteBuffer message,
 
 This is useful for auditing/replaying messages from arbitrary points on a Kafka topic, saving the partition and offset of each message of a discrete stream instead of persisting the entire message.
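The idea can be shown with a self-contained sketch in plain Java. This is an illustration only, not the actual storm-kafka `MessageMetadataScheme` API: the `partitionId` parameter here stands in for storm-kafka's `Partition` object, and the class name is hypothetical. The payload is deserialized and the partition id and offset are appended as extra tuple fields, so a downstream component can persist just (partition, offset) instead of the whole message.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

// Illustrative stand-in for deserializeMessageWithMetadata: emit the message
// string plus the partition and offset it came from.
class MetadataSchemeSketch {
    static List<Object> deserializeWithMetadata(ByteBuffer message,
                                                int partitionId,
                                                long offset) {
        byte[] bytes = new byte[message.remaining()];
        message.duplicate().get(bytes); // copy without disturbing the buffer's position
        String payload = new String(bytes, StandardCharsets.UTF_8);
        return Arrays.asList(payload, partitionId, offset);
    }
}
```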
 
-###Failed message retry
+### Failed message retry
 FailedMsgRetryManager is an interface which defines the retry strategy for a failed message. The default implementation is ExponentialBackoffMsgRetryManager, which retries with exponential delays
 between consecutive retries. To use a custom implementation, set SpoutConfig.failedMsgRetryManagerClass to the full classname
 of the implementation. Here is the interface:
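The exponential-backoff behavior can be sketched in isolation. This is an illustration of the idea, not the actual ExponentialBackoffMsgRetryManager source; the initial delay, multiplier, and cap below are assumed illustrative values.

```java
// Sketch of exponential backoff between retries: the delay grows by a
// constant factor with each failed attempt, up to a maximum.
class BackoffSketch {
    static final long INITIAL_DELAY_MS = 1000L; // illustrative defaults
    static final double MULTIPLIER = 2.0;
    static final long MAX_DELAY_MS = 60_000L;

    // Delay to wait before retry number `attempt` (0-based).
    static long delayMs(int attempt) {
        double d = INITIAL_DELAY_MS * Math.pow(MULTIPLIER, attempt);
        return (long) Math.min(d, MAX_DELAY_MS);
    }
}
```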
@@ -253,14 +253,14 @@ When selecting a kafka dependency version, you should ensure -
  0.8.x broker.
 
 
-##Writing to Kafka as part of your topology
+## Writing to Kafka as part of your topology
 You can create an instance of org.apache.storm.kafka.bolt.KafkaBolt and attach it as a component to your topology, or, if you
 are using Trident, you can use org.apache.storm.kafka.trident.TridentState, org.apache.storm.kafka.trident.TridentStateFactory and
 org.apache.storm.kafka.trident.TridentKafkaUpdater.
 
 You need to provide an implementation of the following two interfaces:
 
-###TupleToKafkaMapper and TridentTupleToKafkaMapper
+### TupleToKafkaMapper and TridentTupleToKafkaMapper
 These interfaces have two methods defined:
 
 ```java
@@ -276,7 +276,7 @@ reasons. Alternatively you could also specify a different key and message field
 In the TridentKafkaState you must specify the field names for the key and message, as there is no default constructor.
 These should be specified while constructing an instance of FieldNameBasedTupleToKafkaMapper.
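The shape of the mapper contract can be sketched with plain Java, using a Map as a stand-in for a Storm tuple (the real interfaces take a `Tuple`/`TridentTuple`). The class name is hypothetical; the field-name lookup mirrors what FieldNameBasedTupleToKafkaMapper does.

```java
import java.util.Map;

// Stand-in for TupleToKafkaMapper: extract the Kafka key and message from a
// tuple by configured field names (here the "tuple" is just a Map).
class FieldNameMapperSketch {
    private final String keyField;
    private final String msgField;

    FieldNameMapperSketch(String keyField, String msgField) {
        this.keyField = keyField;
        this.msgField = msgField;
    }

    Object getKeyFromTuple(Map<String, Object> tuple) {
        return tuple.get(keyField);
    }

    Object getMessageFromTuple(Map<String, Object> tuple) {
        return tuple.get(msgField);
    }
}
```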
 
-###KafkaTopicSelector and trident KafkaTopicSelector
+### KafkaTopicSelector and trident KafkaTopicSelector
 This interface has only one method:
 ```java
 public interface KafkaTopicSelector {
@@ -295,7 +295,7 @@ Please make sure the default topic have created .
 You can provide all the producer properties in your Storm topology by calling `KafkaBolt.withProducerProperties()` and `TridentKafkaStateFactory.withProducerProperties()`. Please see http://kafka.apache.org/documentation.html#newproducerconfigs,
 section "Important configuration properties for the producer", for more details.
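For example, the producer settings are plain Kafka producer `Properties`. Building the object is shown self-contained below (the broker address is an assumed placeholder); the resulting `Properties` would then be passed to `KafkaBolt.withProducerProperties()` or the Trident equivalent.

```java
import java.util.Properties;

// Standard Kafka new-producer configs; this Properties object would be handed
// to KafkaBolt.withProducerProperties() (or TridentKafkaStateFactory's).
class ProducerPropsExample {
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative broker list
        props.put("acks", "1");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}
```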
 
-###Using wildcard kafka topic match
+### Using wildcard kafka topic match
 You can do a wildcard topic match by adding the following config:
 ```
      Config config = new Config();
@@ -306,7 +306,7 @@ You can do a wildcard topic match by adding the following config
 After this you can specify a wildcard topic for matching, e.g. clickstream.*.log. This will match all topics such as clickstream.my.log, clickstream.cart.log, etc.
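A minimal illustration of how such a wildcard topic string can match concrete topic names, treating it as a plain Java regex. This is illustrative only, not storm-kafka's internal matching code.

```java
import java.util.regex.Pattern;

// Illustration: a wildcard topic string like "clickstream.*.log", used as a
// regex, matches concrete topic names such as "clickstream.my.log".
class WildcardTopicSketch {
    static boolean topicMatches(String wildcardTopic, String topicName) {
        return Pattern.matches(wildcardTopic, topicName);
    }
}
```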
 
 
-###Putting it all together
+### Putting it all together
 
 For the bolt:
 ```java


[2/2] storm git commit: Merge branch 'pr-2028-1.x' into 1.x-branch

Posted by ka...@apache.org.
Merge branch 'pr-2028-1.x' into 1.x-branch


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/c05ba1df
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/c05ba1df
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/c05ba1df

Branch: refs/heads/1.x-branch
Commit: c05ba1dfd77752c452be9373783c323d3941a4b8
Parents: 04dc846 ff69ba6
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Fri Apr 14 15:22:09 2017 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Fri Apr 14 15:22:09 2017 +0900

----------------------------------------------------------------------
 external/storm-kafka/README.md | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)
----------------------------------------------------------------------