Posted to commits@carbondata.apache.org by ra...@apache.org on 2017/10/31 07:00:14 UTC

[16/22] carbondata git commit: [CARBONDATA-1598] Remove all spark 1.x info(CI, README, documents)

[CARBONDATA-1598] Remove all spark 1.x info(CI, README, documents)

This closes #1423


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/5a67c98f
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/5a67c98f
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/5a67c98f

Branch: refs/heads/fgdatamap
Commit: 5a67c98f1072c656acebf1b853e45c1f1b5d203f
Parents: 7036696
Author: chenliang613 <ch...@apache.org>
Authored: Thu Oct 19 18:10:29 2017 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Mon Oct 23 10:20:16 2017 +0530

----------------------------------------------------------------------
 README.md                             |  2 -
 assembly/pom.xml                      | 21 ----------
 build/README.md                       | 15 ++++----
 docs/quick-start-guide.md             | 62 ------------------------------
 integration/spark-common-test/pom.xml | 22 -----------
 5 files changed, 8 insertions(+), 114 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/5a67c98f/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index 06f1ce5..297cece 100644
--- a/README.md
+++ b/README.md
@@ -29,8 +29,6 @@ You can find the latest CarbonData document and learn more at:
 ## Status
 Spark2.1:
 [![Build Status](https://builds.apache.org/buildStatus/icon?job=carbondata-master-spark-2.1)](https://builds.apache.org/view/CarbonData/job/carbondata-master-spark-2.1/)
-Spark1.6:
-[![Build Status](https://builds.apache.org/buildStatus/icon?job=carbondata-master-spark-1.6)](https://builds.apache.org/view/CarbonData/job/carbondata-master-spark-1.6/)
 
 ## Features
 CarbonData file format is a columnar store in HDFS. It has many features that a modern columnar format has, such as splittability, compression schemes, and complex data types, and CarbonData has the following unique features:

http://git-wip-us.apache.org/repos/asf/carbondata/blob/5a67c98f/assembly/pom.xml
----------------------------------------------------------------------
diff --git a/assembly/pom.xml b/assembly/pom.xml
index d705b66..b5652a5 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -124,29 +124,8 @@
       </plugin>
     </plugins>
   </build>
-
   <profiles>
     <profile>
-      <id>spark-1.5</id>
-      <dependencies>
-        <dependency>
-          <groupId>org.apache.carbondata</groupId>
-          <artifactId>carbondata-spark</artifactId>
-          <version>${project.version}</version>
-        </dependency>
-      </dependencies>
-    </profile>
-    <profile>
-      <id>spark-1.6</id>
-      <dependencies>
-        <dependency>
-          <groupId>org.apache.carbondata</groupId>
-          <artifactId>carbondata-spark</artifactId>
-          <version>${project.version}</version>
-        </dependency>
-      </dependencies>
-    </profile>
-    <profile>
       <id>spark-2.1</id>
       <!-- default -->
       <activation>
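With the spark-1.5 and spark-1.6 profiles deleted, spark-2.1 is the only Spark profile left in this module, and the context lines show it is marked as the default. The snippet below is a sketch reconstructed from those context lines (not the exact file content) together with a quick grep check that no spark-1.x profile id survives:

```shell
# Reconstructed from the diff's context lines: the shape of the <profiles>
# section left in assembly/pom.xml after this commit (a sketch, not the
# exact file content).
remaining='
  <profiles>
    <profile>
      <id>spark-2.1</id>
      <!-- default -->
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
    </profile>
  </profiles>'

# Confirm no spark-1.x profile id survives in the sketch.
if printf '%s\n' "$remaining" | grep -q '<id>spark-1'; then
  echo "spark-1.x profile still present"
else
  echo "only spark-2.1 remains"
fi
```

Because spark-2.1 is activeByDefault, builds no longer need an explicit `-Pspark-2.1` flag unless overriding the Spark version.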

http://git-wip-us.apache.org/repos/asf/carbondata/blob/5a67c98f/build/README.md
----------------------------------------------------------------------
diff --git a/build/README.md b/build/README.md
index 5fa6814..50f6ce2 100644
--- a/build/README.md
+++ b/build/README.md
@@ -27,12 +27,17 @@
 * [Apache Thrift 0.9.3](http://archive.apache.org/dist/thrift/0.9.3/)
 
 ## Build command
-Build without test,by default carbondata takes Spark 1.6.2 to build the project
+From 1.3.0 onwards, CarbonData supports Spark 2.x. Build without tests (by default, CarbonData builds the project against Spark 2.1.0):
 ```
 mvn -DskipTests clean package
 ```
 
-Build with different supported versions of Spark.
+Build with tests:
+```
+mvn clean package
+```
+
+Before 1.3.0, CarbonData could be built against different supported versions of Spark:
 ```
 mvn -DskipTests -Pspark-1.5 -Dspark.version=1.5.1 clean package
 mvn -DskipTests -Pspark-1.5 -Dspark.version=1.5.2 clean package
@@ -44,13 +49,9 @@ mvn -DskipTests -Pspark-1.6 -Dspark.version=1.6.3 clean package
 mvn -DskipTests -Pspark-2.1 -Dspark.version=2.1.0 clean package
 ```
 
-Build with test
-```
-mvn clean package
-```
 
 ## For contributors: to build the format code after any changes, use the command below.
 Note: Apache Thrift 0.9.3 must be installed.
 ```
-mvn clean -DskipTests -Pbuild-with-format -Pspark-1.6 package
+mvn clean -DskipTests -Pbuild-with-format -Pspark-2.1 package
 ```
\ No newline at end of file
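The updated build/README.md reduces to three commands; the sketch below only collects them in one place for reference (it assumes `mvn` on the PATH and, for the format step, Thrift 0.9.3 — the commands are printed here, not executed):

```shell
# Sketch collecting the post-1.3.0 build commands from the updated
# build/README.md (printed for reference, not executed).
DEFAULT_BUILD='mvn -DskipTests clean package'   # builds against Spark 2.1.0 by default
BUILD_WITH_TESTS='mvn clean package'
FORMAT_BUILD='mvn clean -DskipTests -Pbuild-with-format -Pspark-2.1 package'

printf '%s\n' "$DEFAULT_BUILD" "$BUILD_WITH_TESTS" "$FORMAT_BUILD"
```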

http://git-wip-us.apache.org/repos/asf/carbondata/blob/5a67c98f/docs/quick-start-guide.md
----------------------------------------------------------------------
diff --git a/docs/quick-start-guide.md b/docs/quick-start-guide.md
index 5b39b9e..d833679 100644
--- a/docs/quick-start-guide.md
+++ b/docs/quick-start-guide.md
@@ -98,65 +98,3 @@ scala>carbon.sql("SELECT city, avg(age), sum(age)
                   FROM test_table
                   GROUP BY city").show()
 ```
-
-## Interactive Analysis with Spark Shell Version 1.6
-
-#### Basics
-
-Start Spark shell by running the following command in the Spark directory:
-
-```
-./bin/spark-shell --jars <carbondata assembly jar path>
-```
-**NOTE**: Assembly jar will be available after [building CarbonData](https://github.com/apache/carbondata/blob/master/build/README.md) and can be copied from `./assembly/target/scala-2.1x/carbondata_xxx.jar`
-
-**NOTE**: In this shell, SparkContext is readily available as `sc`.
-
-* In order to execute the Queries we need to import CarbonContext:
-
-```
-import org.apache.spark.sql.CarbonContext
-```
-
-* Create an instance of CarbonContext in the following manner :
-
-```
-val cc = new CarbonContext(sc, "<hdfs store path>")
-```
-**NOTE**: If running on local machine without hdfs, configure the local machine's store path instead of hdfs store path
-
-#### Executing Queries
-
-###### Creating a Table
-
-```
-scala>cc.sql("CREATE TABLE
-              IF NOT EXISTS test_table (
-                         id string,
-                         name string,
-                         city string,
-                         age Int)
-              STORED BY 'carbondata'")
-```
-To see the table created :
-
-```
-scala>cc.sql("SHOW TABLES").show()
-```
-
-###### Loading Data to a Table
-
-```
-scala>cc.sql("LOAD DATA INPATH 'sample.csv file path'
-              INTO TABLE test_table")
-```
-**NOTE**: Please provide the real file path of `sample.csv` for the above script.
-
-###### Querying Data from a Table
-
-```
-scala>cc.sql("SELECT * FROM test_table").show()
-scala>cc.sql("SELECT city, avg(age), sum(age)
-              FROM test_table
-              GROUP BY city").show()
-```

http://git-wip-us.apache.org/repos/asf/carbondata/blob/5a67c98f/integration/spark-common-test/pom.xml
----------------------------------------------------------------------
diff --git a/integration/spark-common-test/pom.xml b/integration/spark-common-test/pom.xml
index b2ee316..8806c0a 100644
--- a/integration/spark-common-test/pom.xml
+++ b/integration/spark-common-test/pom.xml
@@ -328,28 +328,6 @@
   </build>
   <profiles>
     <profile>
-      <id>spark-1.5</id>
-      <dependencies>
-        <dependency>
-          <groupId>org.apache.carbondata</groupId>
-          <artifactId>carbondata-spark</artifactId>
-          <version>${project.version}</version>
-          <scope>test</scope>
-        </dependency>
-      </dependencies>
-    </profile>
-    <profile>
-      <id>spark-1.6</id>
-      <dependencies>
-        <dependency>
-          <groupId>org.apache.carbondata</groupId>
-          <artifactId>carbondata-spark</artifactId>
-          <version>${project.version}</version>
-          <scope>test</scope>
-        </dependency>
-      </dependencies>
-    </profile>
-    <profile>
       <id>spark-2.1</id>
       <activation>
         <activeByDefault>true</activeByDefault>