Posted to commits@storm.apache.org by ka...@apache.org on 2016/08/24 02:10:51 UTC

storm git commit: Fix storm-sql readme to reflect improvement of submission (STORM-2016)

Repository: storm
Updated Branches:
  refs/heads/1.x-branch f741d9b48 -> a3090d313


Fix storm-sql readme to reflect improvement of submission (STORM-2016)


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/a3090d31
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/a3090d31
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/a3090d31

Branch: refs/heads/1.x-branch
Commit: a3090d3135d6c33c2802692cb9f7bc4c0e889937
Parents: f741d9b
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Tue Aug 23 19:02:54 2016 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Wed Aug 24 11:10:41 2016 +0900

----------------------------------------------------------------------
 docs/storm-sql.md      | 16 ++++------------
 external/sql/README.md | 14 +++-----------
 2 files changed, 7 insertions(+), 23 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/a3090d31/docs/storm-sql.md
----------------------------------------------------------------------
diff --git a/docs/storm-sql.md b/docs/storm-sql.md
index 3ad9805..8c94045 100644
--- a/docs/storm-sql.md
+++ b/docs/storm-sql.md
@@ -69,22 +69,14 @@ Current implementation of `storm-sql-kafka` requires specifying both `LOCATION`
 
 Similarly, the second statement specifies the table `LARGE_ORDERS` which represents the output stream. The third statement is a `SELECT` statement which defines the topology: it instructs StormSQL to filter all orders in the external table `ORDERS`, calculates the total price and inserts matching records into the Kafka stream specified by `LARGE_ORDER`.
 
-To run this example, users need to include the data sources (`storm-sql-kafka` in this case) and its dependency in the class path. One approach is to put the required jars into the `extlib` directory:
+To run this example, users need to include the data sources (`storm-sql-kafka` in this case) and their dependencies in the
+class path. Dependencies for Storm SQL itself are handled automatically when users run `storm sql`. Users can include data sources at the submission step as shown below:
 
 ```
-$ cp curator-client-2.5.0.jar curator-framework-2.5.0.jar zookeeper-3.4.6.jar
- extlib/
-$ cp scala-library-2.10.4.jar kafka-clients-0.8.2.1.jar kafka_2.10-0.8.2.1.jar metrics-core-2.2.0.jar extlib/
-$ cp json-simple-1.1.1.jar extlib/
-$ cp jackson-annotations-2.6.0.jar extlib/
-$ cp storm-kafka-*.jar storm-sql-kafka-*.jar storm-sql-runtime-*.jar extlib/
+$ bin/storm sql order_filtering.sql order_filtering --artifacts "org.apache.storm:storm-sql-kafka:2.0.0-SNAPSHOT,org.apache.storm:storm-kafka:2.0.0-SNAPSHOT,org.apache.kafka:kafka_2.10:0.8.2.2\!org.slf4j:slf4j-log4j12,org.apache.kafka:kafka-clients:0.8.2.2"
 ```
 
-The next step is to submit the SQL statements to StormSQL:
-
-```
-$ bin/storm sql order_filtering order_filtering.sql
-```
+The above command submits the SQL statements to StormSQL. Users need to adjust each artifact's version if they are using a different version of Storm or Kafka.
 
 By now you should be able to see the `order_filtering` topology in the Storm UI.
 

http://git-wip-us.apache.org/repos/asf/storm/blob/a3090d31/external/sql/README.md
----------------------------------------------------------------------
diff --git a/external/sql/README.md b/external/sql/README.md
index 6f68951..2ac44a5 100644
--- a/external/sql/README.md
+++ b/external/sql/README.md
@@ -67,21 +67,13 @@ table `ORDERS`, calculates the total price and inserts matching records into the
 `LARGE_ORDER`.
 
 To run this example, users need to include the data sources (`storm-sql-kafka` in this case) and its dependency in the
-class path. One approach is to put the required jars into the `extlib` directory:
+class path. Dependencies for Storm SQL itself are handled automatically when users run `storm sql`. Users can include data sources at the submission step as shown below:
 
 ```
-$ cp curator-client-2.5.0.jar curator-framework-2.5.0.jar zookeeper-3.4.6.jar
- extlib/
-$ cp scala-library-2.10.4.jar kafka-clients-0.8.2.1.jar kafka_2.10-0.8.2.1.jar metrics-core-2.2.0.jar extlib/
-$ cp json-simple-1.1.1.jar extlib/
-$ cp storm-kafka-*.jar storm-sql-kafka-*.jar storm-sql-runtime-*.jar extlib/
+$ bin/storm sql order_filtering.sql order_filtering --artifacts "org.apache.storm:storm-sql-kafka:2.0.0-SNAPSHOT,org.apache.storm:storm-kafka:2.0.0-SNAPSHOT,org.apache.kafka:kafka_2.10:0.8.2.2\!org.slf4j:slf4j-log4j12,org.apache.kafka:kafka-clients:0.8.2.2"
 ```
 
-The next step is to submit the SQL statements to StormSQL:
-
-```
-$ bin/storm sql order_filtering order_filtering.sql
-```
+The above command submits the SQL statements to StormSQL. Users need to adjust each artifact's version if they are using a different version of Storm or Kafka.
 
 By now you should be able to see the `order_filtering` topology in the Storm UI.
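The `--artifacts` value introduced by this diff is a comma-separated list of Maven coordinates (`groupId:artifactId:version`), where `\!` marks a transitive dependency of the preceding artifact to exclude. A minimal shell sketch that assembles the same list against hypothetical release versions (the version numbers are placeholders; substitute the Storm and Kafka versions actually deployed on your cluster):

```shell
#!/bin/sh
# Hypothetical versions -- replace with the Storm and Kafka versions in use.
STORM_VERSION="1.0.2"
KAFKA_VERSION="0.8.2.2"

# Comma-separated Maven coordinates; "\!" excludes a transitive dependency
# (slf4j-log4j12 is commonly excluded to avoid conflicting SLF4J bindings).
ARTIFACTS="org.apache.storm:storm-sql-kafka:${STORM_VERSION}"
ARTIFACTS="${ARTIFACTS},org.apache.storm:storm-kafka:${STORM_VERSION}"
ARTIFACTS="${ARTIFACTS},org.apache.kafka:kafka_2.10:${KAFKA_VERSION}\!org.slf4j:slf4j-log4j12"
ARTIFACTS="${ARTIFACTS},org.apache.kafka:kafka-clients:${KAFKA_VERSION}"

# Print the full submission command rather than executing it, so the sketch
# stays runnable without a Storm installation.
echo "bin/storm sql order_filtering.sql order_filtering --artifacts \"${ARTIFACTS}\""
```

Building the list from version variables keeps the Storm-side and Kafka-side coordinates in sync when upgrading either component.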