Posted to commits@storm.apache.org by ka...@apache.org on 2016/12/27 15:20:55 UTC

[1/3] storm git commit: [STORM-2082][SQL] add sql external module storm-sql-hdfs

Repository: storm
Updated Branches:
  refs/heads/1.x-branch 2aed0f6c7 -> a37e2d3e9


[STORM-2082][SQL] add sql external module storm-sql-hdfs

* resolve conflicts by Jungtaek Lim <ka...@gmail.com>


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/c3b6b038
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/c3b6b038
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/c3b6b038

Branch: refs/heads/1.x-branch
Commit: c3b6b0387733885130b1ba13e35114d110e19dbd
Parents: 2aed0f6
Author: Xin Wang <be...@163.com>
Authored: Sun Nov 13 18:25:27 2016 +0800
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Wed Dec 28 00:18:53 2016 +0900

----------------------------------------------------------------------
 docs/storm-sql-reference.md                     |  22 ++-
 external/sql/pom.xml                            |   1 +
 .../storm-sql-external/storm-sql-hdfs/pom.xml   | 104 ++++++++++++++
 .../storm/sql/hdfs/HdfsDataSourcesProvider.java | 135 +++++++++++++++++++
 ...apache.storm.sql.runtime.DataSourcesProvider |  16 +++
 .../sql/hdfs/TestHdfsDataSourcesProvider.java   | 129 ++++++++++++++++++
 storm-dist/binary/src/main/assembly/binary.xml  |   7 +
 7 files changed, 413 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/c3b6b038/docs/storm-sql-reference.md
----------------------------------------------------------------------
diff --git a/docs/storm-sql-reference.md b/docs/storm-sql-reference.md
index 1accf55..c14af1f 100644
--- a/docs/storm-sql-reference.md
+++ b/docs/storm-sql-reference.md
@@ -1272,6 +1272,7 @@ Please note that it supports only one letter for delimiter.
 | Kafka | org.apache.storm:storm-sql-kafka | `kafka://zkhost:port/broker_path?topic=topic` | Yes | Yes | Yes
 | Redis | org.apache.storm:storm-sql-redis | `redis://:[password]@host:port/[dbIdx]` | No | Yes | Yes
 | MongoDB | org.apache.storm:storm-sql-mongodb | `mongodb://[username:password@]host1[:port1][,host2[:port2],...[,hostN[:portN]]][/[database][?options]]` | No | Yes | Yes
+| HDFS | org.apache.storm:storm-sql-hdfs | `hdfs://host:port/path-to-file` | No | Yes | Yes
 
 #### Socket
 
@@ -1321,4 +1322,23 @@ You can use below as working reference for `--artifacts` option, and change depe
 
 `org.apache.storm:storm-sql-mongodb:2.0.0-SNAPSHOT,org.apache.storm:storm-mongodb:2.0.0-SNAPSHOT`
 
-Storing records while preserving fields is not supported for now.
\ No newline at end of file
+Storing records while preserving fields is not supported for now.
+
+#### HDFS
+
+The HDFS data source requires the following properties to be set:
+
+* `hdfs.file.path`: HDFS file path
+* `hdfs.file.name`: HDFS file name - please refer to [SimpleFileNameFormat]({{page.git-blob-base}}/external/storm-hdfs/src/main/java/org/apache/storm/hdfs/trident/format/SimpleFileNameFormat.java)
+* `hdfs.rotation.size.kb`: file rotation size in KB (uses `FileSizeRotationPolicy`)
+* `hdfs.rotation.time.seconds`: file rotation interval in seconds (uses `TimedRotationPolicy`)
+
+Please note that only one of `hdfs.rotation.size.kb` and `hdfs.rotation.time.seconds` can be used for HDFS file rotation.
+
+Note that `storm-sql-hdfs` also requires users to provide `storm-hdfs`.
+You can use the line below as a working reference for the `--artifacts` option, changing the dependency versions if needed:
+
+`org.apache.storm:storm-sql-hdfs:2.0.0-SNAPSHOT,org.apache.storm:storm-hdfs:2.0.0-SNAPSHOT`
+
+Also, the HDFS configuration files must be provided.
+You can put `core-site.xml` and `hdfs-site.xml` into the `conf` directory under the Storm installation directory.
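
For context beyond the diff: the snippet below is a minimal sketch of how these table properties reach the HDFS data source, modeled on the test added later in this commit. The NameNode address, field list, and property values are illustrative assumptions, not values taken from the commit.

    import org.apache.storm.sql.runtime.DataSourcesRegistry;
    import org.apache.storm.sql.runtime.FieldInfo;
    import org.apache.storm.sql.runtime.ISqlTridentDataSource;

    import java.net.URI;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Properties;

    public class HdfsSinkSketch {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put("hdfs.file.path", "/storm/sql-out");   // target directory in HDFS
        props.put("hdfs.file.name", "$TIME.$NUM.txt");   // SimpleFileNameFormat pattern
        props.put("hdfs.rotation.time.seconds", "120");  // either this or hdfs.rotation.size.kb

        List<FieldInfo> fields = Arrays.asList(
            new FieldInfo("ID", int.class, true),
            new FieldInfo("VAL", String.class, false));

        // "hdfs://localhost:8020/" is a placeholder NameNode URI.
        ISqlTridentDataSource ds = DataSourcesRegistry.constructTridentDataSource(
            URI.create("hdfs://localhost:8020/"), null, null, props, fields);
      }
    }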

http://git-wip-us.apache.org/repos/asf/storm/blob/c3b6b038/external/sql/pom.xml
----------------------------------------------------------------------
diff --git a/external/sql/pom.xml b/external/sql/pom.xml
index 3fd6a34..e4793ca 100644
--- a/external/sql/pom.xml
+++ b/external/sql/pom.xml
@@ -42,5 +42,6 @@
         <module>storm-sql-external/storm-sql-kafka</module>
         <module>storm-sql-external/storm-sql-redis</module>
         <module>storm-sql-external/storm-sql-mongodb</module>
+        <module>storm-sql-external/storm-sql-hdfs</module>
     </modules>
 </project>

http://git-wip-us.apache.org/repos/asf/storm/blob/c3b6b038/external/sql/storm-sql-external/storm-sql-hdfs/pom.xml
----------------------------------------------------------------------
diff --git a/external/sql/storm-sql-external/storm-sql-hdfs/pom.xml b/external/sql/storm-sql-external/storm-sql-hdfs/pom.xml
new file mode 100644
index 0000000..f6a7e63
--- /dev/null
+++ b/external/sql/storm-sql-external/storm-sql-hdfs/pom.xml
@@ -0,0 +1,104 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <artifactId>storm</artifactId>
+        <groupId>org.apache.storm</groupId>
+        <version>1.1.0-SNAPSHOT</version>
+        <relativePath>../../../../pom.xml</relativePath>
+    </parent>
+
+    <artifactId>storm-sql-hdfs</artifactId>
+
+    <developers>
+        <developer>
+            <id>vesense</id>
+            <name>Xin Wang</name>
+            <email>data.xinwang@gmail.com</email>
+        </developer>
+    </developers>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.storm</groupId>
+            <artifactId>storm-core</artifactId>
+            <version>${project.version}</version>
+            <scope>provided</scope>
+            <exclusions>
+                <!--log4j-over-slf4j must be excluded for hadoop-minicluster
+                    see: http://stackoverflow.com/q/20469026/3542091 -->
+                <exclusion>
+                    <groupId>org.slf4j</groupId>
+                    <artifactId>log4j-over-slf4j</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.storm</groupId>
+            <artifactId>storm-sql-runtime</artifactId>
+            <version>${project.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.storm</groupId>
+            <artifactId>storm-sql-runtime</artifactId>
+            <version>${project.version}</version>
+            <scope>test</scope>
+            <type>test-jar</type>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.storm</groupId>
+            <artifactId>storm-hdfs</artifactId>
+            <version>${project.version}</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-minicluster</artifactId>
+            <version>${hadoop.version}</version>
+            <exclusions>
+                <exclusion>
+                    <groupId>org.slf4j</groupId>
+                    <artifactId>slf4j-log4j12</artifactId>
+                </exclusion>
+            </exclusions>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>junit</groupId>
+            <artifactId>junit</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.mockito</groupId>
+            <artifactId>mockito-all</artifactId>
+            <scope>test</scope>
+        </dependency>
+    </dependencies>
+    <build>
+        <sourceDirectory>src/jvm</sourceDirectory>
+        <testSourceDirectory>src/test</testSourceDirectory>
+        <resources>
+            <resource>
+                <directory>${basedir}/src/resources</directory>
+            </resource>
+        </resources>
+    </build>
+</project>
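
As a usage note, not part of the commit: with the module wired into the reactor, standard Maven module selection should build and test it from the repository root, e.g. `mvn -pl external/sql/storm-sql-external/storm-sql-hdfs -am test` (the `-am` flag also builds in-repo dependencies such as storm-sql-runtime).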

http://git-wip-us.apache.org/repos/asf/storm/blob/c3b6b038/external/sql/storm-sql-external/storm-sql-hdfs/src/jvm/org/apache/storm/sql/hdfs/HdfsDataSourcesProvider.java
----------------------------------------------------------------------
diff --git a/external/sql/storm-sql-external/storm-sql-hdfs/src/jvm/org/apache/storm/sql/hdfs/HdfsDataSourcesProvider.java b/external/sql/storm-sql-external/storm-sql-hdfs/src/jvm/org/apache/storm/sql/hdfs/HdfsDataSourcesProvider.java
new file mode 100644
index 0000000..38c3fcb
--- /dev/null
+++ b/external/sql/storm-sql-external/storm-sql-hdfs/src/jvm/org/apache/storm/sql/hdfs/HdfsDataSourcesProvider.java
@@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.sql.hdfs;
+
+import com.google.common.base.Preconditions;
+import org.apache.storm.hdfs.trident.HdfsState;
+import org.apache.storm.hdfs.trident.HdfsStateFactory;
+import org.apache.storm.hdfs.trident.HdfsUpdater;
+import org.apache.storm.hdfs.trident.format.FileNameFormat;
+import org.apache.storm.hdfs.trident.format.RecordFormat;
+import org.apache.storm.hdfs.trident.format.SimpleFileNameFormat;
+import org.apache.storm.hdfs.trident.rotation.FileRotationPolicy;
+import org.apache.storm.hdfs.trident.rotation.FileSizeRotationPolicy;
+import org.apache.storm.hdfs.trident.rotation.TimedRotationPolicy;
+import org.apache.storm.sql.runtime.DataSource;
+import org.apache.storm.sql.runtime.DataSourcesProvider;
+import org.apache.storm.sql.runtime.FieldInfo;
+import org.apache.storm.sql.runtime.IOutputSerializer;
+import org.apache.storm.sql.runtime.ISqlTridentDataSource;
+import org.apache.storm.sql.runtime.SimpleSqlTridentConsumer;
+import org.apache.storm.sql.runtime.utils.FieldInfoUtils;
+import org.apache.storm.sql.runtime.utils.SerdeUtils;
+import org.apache.storm.trident.spout.ITridentDataSource;
+import org.apache.storm.trident.state.StateFactory;
+import org.apache.storm.trident.state.StateUpdater;
+import org.apache.storm.trident.tuple.TridentTuple;
+
+import java.net.URI;
+import java.util.List;
+import java.util.Properties;
+
+/**
+ * Creates an HDFS sink based on the URI and properties. The URI has the format hdfs://host:port/path-to-file.
+ * The properties are in JSON format and specify, among other things, the name and path of the HDFS file.
+ */
+public class HdfsDataSourcesProvider implements DataSourcesProvider {
+
+  private static class HdfsTridentDataSource implements ISqlTridentDataSource {
+    private final String url;
+    private final Properties props;
+    private final IOutputSerializer serializer;
+
+    private HdfsTridentDataSource(String url, Properties props, IOutputSerializer serializer) {
+      this.url = url;
+      this.props = props;
+      this.serializer = serializer;
+    }
+
+    @Override
+    public ITridentDataSource getProducer() {
+      throw new UnsupportedOperationException(this.getClass().getName() + " doesn't provide Producer");
+    }
+
+    @Override
+    public SqlTridentConsumer getConsumer() {
+      FileNameFormat fileNameFormat = new SimpleFileNameFormat()
+          .withPath(props.getProperty("hdfs.file.path", "/storm"))
+          .withName(props.getProperty("hdfs.file.name", "$TIME.$NUM.txt"));
+
+      RecordFormat recordFormat = new TridentRecordFormat(serializer);
+
+      FileRotationPolicy rotationPolicy;
+      String size = props.getProperty("hdfs.rotation.size.kb");
+      String interval = props.getProperty("hdfs.rotation.time.seconds");
+      Preconditions.checkArgument(size != null || interval != null, "Hdfs data source must contain file rotation config");
+
+      if (size != null) {
+        rotationPolicy = new FileSizeRotationPolicy(Float.parseFloat(size), FileSizeRotationPolicy.Units.KB);
+      } else {
+        rotationPolicy = new TimedRotationPolicy(Float.parseFloat(interval), TimedRotationPolicy.TimeUnit.SECONDS);
+      }
+
+      HdfsState.Options options = new HdfsState.HdfsFileOptions()
+          .withFileNameFormat(fileNameFormat)
+          .withRecordFormat(recordFormat)
+          .withRotationPolicy(rotationPolicy)
+          .withFsUrl(url);
+
+      StateFactory stateFactory = new HdfsStateFactory().withOptions(options);
+      StateUpdater stateUpdater = new HdfsUpdater();
+
+      return new SimpleSqlTridentConsumer(stateFactory, stateUpdater);
+    }
+  }
+
+  private static class TridentRecordFormat implements RecordFormat {
+    private final IOutputSerializer serializer;
+
+    private TridentRecordFormat(IOutputSerializer serializer) {
+      this.serializer = serializer;
+    }
+
+    @Override
+    public byte[] format(TridentTuple tuple) {
+      // TODO: we should handle '\n'; see DelimitedRecordFormat
+      return serializer.write(tuple.getValues(), null).array();
+    }
+
+  }
+
+  @Override
+  public String scheme() {
+    return "hdfs";
+  }
+
+  @Override
+  public DataSource construct(URI uri, String inputFormatClass, String outputFormatClass,
+                              List<FieldInfo> fields) {
+    throw new UnsupportedOperationException();
+  }
+
+  @Override
+  public ISqlTridentDataSource constructTrident(URI uri, String inputFormatClass, String outputFormatClass,
+                                                Properties properties, List<FieldInfo> fields) {
+    List<String> fieldNames = FieldInfoUtils.getFieldNames(fields);
+    IOutputSerializer serializer = SerdeUtils.getSerializer(outputFormatClass, properties, fieldNames);
+    return new HdfsTridentDataSource(uri.toString(), properties, serializer);
+  }
+
+}
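
To see how the StateFactory/StateUpdater pair returned by getConsumer() is typically consumed downstream (that wiring lives in the Storm SQL Trident planner, not in this commit), here is a minimal sketch; the class name, spout, and field names are placeholder assumptions.

    import org.apache.storm.sql.runtime.ISqlTridentDataSource;
    import org.apache.storm.trident.Stream;
    import org.apache.storm.trident.TridentTopology;
    import org.apache.storm.trident.spout.ITridentSpout;
    import org.apache.storm.tuple.Fields;

    public class HdfsConsumerWiringSketch {
      // 'ds' comes from DataSourcesRegistry; 'spout' is any spout emitting ID, VAL.
      static TridentTopology wire(ISqlTridentDataSource ds, ITridentSpout<?> spout) {
        TridentTopology topology = new TridentTopology();
        ISqlTridentDataSource.SqlTridentConsumer consumer = ds.getConsumer();
        Stream stream = topology.newStream("records", spout);
        // Each batch is handed to HdfsUpdater, which appends to the rotating HDFS file.
        stream.partitionPersist(consumer.getStateFactory(),
            new Fields("ID", "VAL"), consumer.getStateUpdater());
        return topology;
      }
    }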

http://git-wip-us.apache.org/repos/asf/storm/blob/c3b6b038/external/sql/storm-sql-external/storm-sql-hdfs/src/resources/META-INF/services/org.apache.storm.sql.runtime.DataSourcesProvider
----------------------------------------------------------------------
diff --git a/external/sql/storm-sql-external/storm-sql-hdfs/src/resources/META-INF/services/org.apache.storm.sql.runtime.DataSourcesProvider b/external/sql/storm-sql-external/storm-sql-hdfs/src/resources/META-INF/services/org.apache.storm.sql.runtime.DataSourcesProvider
new file mode 100644
index 0000000..5fac84f
--- /dev/null
+++ b/external/sql/storm-sql-external/storm-sql-hdfs/src/resources/META-INF/services/org.apache.storm.sql.runtime.DataSourcesProvider
@@ -0,0 +1,16 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+org.apache.storm.sql.hdfs.HdfsDataSourcesProvider
\ No newline at end of file
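
For readers unfamiliar with the registration mechanism: entries in META-INF/services files are discovered through java.util.ServiceLoader, and DataSourcesRegistry performs an equivalent scan keyed on scheme(). A minimal sketch of the lookup pattern (the helper class and method are mine, not from the codebase):

    import org.apache.storm.sql.runtime.DataSourcesProvider;

    import java.util.ServiceLoader;

    public class ProviderLookupSketch {
      // Returns the provider registered for the given URI scheme, e.g. "hdfs".
      static DataSourcesProvider lookup(String scheme) {
        for (DataSourcesProvider provider : ServiceLoader.load(DataSourcesProvider.class)) {
          if (scheme.equals(provider.scheme())) {
            return provider;
          }
        }
        return null;
      }
    }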

http://git-wip-us.apache.org/repos/asf/storm/blob/c3b6b038/external/sql/storm-sql-external/storm-sql-hdfs/src/test/org/apache/storm/sql/hdfs/TestHdfsDataSourcesProvider.java
----------------------------------------------------------------------
diff --git a/external/sql/storm-sql-external/storm-sql-hdfs/src/test/org/apache/storm/sql/hdfs/TestHdfsDataSourcesProvider.java b/external/sql/storm-sql-external/storm-sql-hdfs/src/test/org/apache/storm/sql/hdfs/TestHdfsDataSourcesProvider.java
new file mode 100644
index 0000000..1473438
--- /dev/null
+++ b/external/sql/storm-sql-external/storm-sql-hdfs/src/test/org/apache/storm/sql/hdfs/TestHdfsDataSourcesProvider.java
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.sql.hdfs;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileUtil;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.storm.hdfs.trident.HdfsState;
+import org.apache.storm.hdfs.trident.HdfsStateFactory;
+import org.apache.storm.hdfs.trident.HdfsUpdater;
+import org.apache.storm.sql.runtime.DataSourcesRegistry;
+import org.apache.storm.sql.runtime.FieldInfo;
+import org.apache.storm.sql.runtime.ISqlTridentDataSource;
+import org.apache.storm.trident.state.StateUpdater;
+import org.apache.storm.trident.tuple.TridentTuple;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.internal.util.reflection.Whitebox;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Properties;
+
+import static org.apache.storm.hdfs.trident.HdfsState.HdfsFileOptions;
+import static org.mockito.Mockito.doReturn;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+
+public class TestHdfsDataSourcesProvider {
+  private static final List<FieldInfo> FIELDS = ImmutableList.of(
+      new FieldInfo("ID", int.class, true),
+      new FieldInfo("val", String.class, false));
+  private static final Properties TBL_PROPERTIES = new Properties();
+
+  private static String hdfsURI;
+  private static MiniDFSCluster hdfsCluster;
+
+  static {
+    TBL_PROPERTIES.put("hdfs.file.path", "/unittest");
+    TBL_PROPERTIES.put("hdfs.file.name", "test1.txt");
+    TBL_PROPERTIES.put("hdfs.rotation.time.seconds", "120");
+  }
+
+  @Before
+  public void setup() throws Exception {
+    Configuration conf = new Configuration();
+    conf.set("fs.trash.interval", "10");
+    conf.setBoolean("dfs.permissions", true);
+    File baseDir = new File("./target/hdfs/").getAbsoluteFile();
+    FileUtil.fullyDelete(baseDir);
+    conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir.getAbsolutePath());
+
+    MiniDFSCluster.Builder builder = new MiniDFSCluster.Builder(conf);
+    hdfsCluster = builder.build();
+    hdfsURI = "hdfs://localhost:" + hdfsCluster.getNameNodePort() + "/";
+  }
+
+  @After
+  public void shutDown() throws IOException {
+    hdfsCluster.shutdown();
+  }
+
+  @SuppressWarnings("unchecked")
+  @Test
+  public void testHdfsSink() {
+    ISqlTridentDataSource ds = DataSourcesRegistry.constructTridentDataSource(
+            URI.create(hdfsURI), null, null, TBL_PROPERTIES, FIELDS);
+    Assert.assertNotNull(ds);
+
+    ISqlTridentDataSource.SqlTridentConsumer consumer = ds.getConsumer();
+
+    Assert.assertEquals(HdfsStateFactory.class, consumer.getStateFactory().getClass());
+    Assert.assertEquals(HdfsUpdater.class, consumer.getStateUpdater().getClass());
+
+    HdfsState state = (HdfsState) consumer.getStateFactory().makeState(Collections.emptyMap(), null, 0, 1);
+    StateUpdater stateUpdater = consumer.getStateUpdater();
+
+    HdfsFileOptions options = mock(HdfsFileOptions.class);
+    Whitebox.setInternalState(state, "options", options);
+
+    List<TridentTuple> tupleList = mockTupleList();
+
+    for (TridentTuple t : tupleList) {
+      stateUpdater.updateState(state, Collections.singletonList(t), null);
+      try {
+        verify(options).execute(Collections.singletonList(t));
+      } catch (IOException e) {
+        throw new RuntimeException(e);
+      }
+    }
+  }
+
+  private static List<TridentTuple> mockTupleList() {
+    List<TridentTuple> tupleList = new ArrayList<>();
+    TridentTuple t0 = mock(TridentTuple.class);
+    TridentTuple t1 = mock(TridentTuple.class);
+    doReturn(1).when(t0).get(0);
+    doReturn(2).when(t1).get(0);
+    doReturn(Lists.<Object>newArrayList(1, "2")).when(t0).getValues();
+    doReturn(Lists.<Object>newArrayList(2, "3")).when(t1).getValues();
+    tupleList.add(t0);
+    tupleList.add(t1);
+    return tupleList;
+  }
+
+}

http://git-wip-us.apache.org/repos/asf/storm/blob/c3b6b038/storm-dist/binary/src/main/assembly/binary.xml
----------------------------------------------------------------------
diff --git a/storm-dist/binary/src/main/assembly/binary.xml b/storm-dist/binary/src/main/assembly/binary.xml
index 66593e2..40c2905 100644
--- a/storm-dist/binary/src/main/assembly/binary.xml
+++ b/storm-dist/binary/src/main/assembly/binary.xml
@@ -319,6 +319,13 @@
             </includes>
         </fileSet>
         <fileSet>
+            <directory>${project.basedir}/../../external/sql/storm-sql-external/storm-sql-hdfs/target</directory>
+            <outputDirectory>external/sql/storm-sql-external/storm-sql-hdfs</outputDirectory>
+            <includes>
+                <include>storm*jar</include>
+            </includes>
+        </fileSet>
+        <fileSet>
             <directory>${project.basedir}/../../external/sql/storm-sql-runtime/target/app-assembler/repo</directory>
             <outputDirectory>external/sql/storm-sql-runtime</outputDirectory>
             <includes>


[2/3] storm git commit: Merge branch 'STORM-2082-1.x-merge' into 1.x-branch

Posted by ka...@apache.org.
Merge branch 'STORM-2082-1.x-merge' into 1.x-branch


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/00e7972d
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/00e7972d
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/00e7972d

Branch: refs/heads/1.x-branch
Commit: 00e7972dcbab90ca1da53b9ebcacaf28d1072746
Parents: 2aed0f6 c3b6b03
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Wed Dec 28 00:19:22 2016 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Wed Dec 28 00:19:22 2016 +0900

----------------------------------------------------------------------
 docs/storm-sql-reference.md                     |  22 ++-
 external/sql/pom.xml                            |   1 +
 .../storm-sql-external/storm-sql-hdfs/pom.xml   | 104 ++++++++++++++
 .../storm/sql/hdfs/HdfsDataSourcesProvider.java | 135 +++++++++++++++++++
 ...apache.storm.sql.runtime.DataSourcesProvider |  16 +++
 .../sql/hdfs/TestHdfsDataSourcesProvider.java   | 129 ++++++++++++++++++
 storm-dist/binary/src/main/assembly/binary.xml  |   7 +
 7 files changed, 413 insertions(+), 1 deletion(-)
----------------------------------------------------------------------



[3/3] storm git commit: STORM-2082: CHANGELOG

Posted by ka...@apache.org.
STORM-2082: CHANGELOG


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/a37e2d3e
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/a37e2d3e
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/a37e2d3e

Branch: refs/heads/1.x-branch
Commit: a37e2d3e9ffc8f047dc3c31b2d8a0fa657de0594
Parents: 00e7972
Author: Jungtaek Lim <ka...@gmail.com>
Authored: Wed Dec 28 00:20:13 2016 +0900
Committer: Jungtaek Lim <ka...@gmail.com>
Committed: Wed Dec 28 00:20:13 2016 +0900

----------------------------------------------------------------------
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/a37e2d3e/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index a9e5305..d5effe0 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,5 @@
 ## 1.1.0
+ * STORM-2082: add sql external module storm-sql-hdfs
  * STORM-2256: storm-pmml breaks on java 1.7
  * STORM-2223: PMML Bolt.
  * STORM-2222: Repeated NPEs thrown in nimbus if rebalance fails