Posted to commits@carbondata.apache.org by ra...@apache.org on 2018/12/07 12:26:42 UTC

[1/8] carbondata-site git commit: Added 1.5.1 version information

Repository: carbondata-site
Updated Branches:
  refs/heads/asf-site 4574eccb4 -> ae77df2e4


http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/site/markdown/ddl-of-carbondata.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/ddl-of-carbondata.md b/src/site/markdown/ddl-of-carbondata.md
index 933a448..965f11c 100644
--- a/src/site/markdown/ddl-of-carbondata.md
+++ b/src/site/markdown/ddl-of-carbondata.md
@@ -33,7 +33,9 @@ CarbonData DDL statements are documented here,which includes:
   * [Hive/Parquet folder Structure](#support-flat-folder-same-as-hiveparquet)
   * [Extra Long String columns](#string-longer-than-32000-characters)
   * [Compression for Table](#compression-for-table)
-  * [Bad Records Path](#bad-records-path)
+  * [Bad Records Path](#bad-records-path) 
+  * [Load Minimum Input File Size](#load-minimum-data-size) 
+
 * [CREATE TABLE AS SELECT](#create-table-as-select)
 * [CREATE EXTERNAL TABLE](#create-external-table)
   * [External Table on Transactional table location](#create-external-table-on-managed-table-data-location)
@@ -84,6 +86,7 @@ CarbonData DDL statements are documented here,which includes:
 | ------------------------------------------------------------ | ------------------------------------------------------------ |
 | [DICTIONARY_INCLUDE](#dictionary-encoding-configuration)     | Columns for which dictionary needs to be generated           |
 | [NO_INVERTED_INDEX](#inverted-index-configuration)           | Columns to exclude from inverted index generation            |
+| [INVERTED_INDEX](#inverted-index-configuration)              | Columns to include for inverted index generation             |
 | [SORT_COLUMNS](#sort-columns-configuration)                  | Columns to include in sort and its order of sort             |
 | [SORT_SCOPE](#sort-scope-configuration)                      | Sort scope of the load.Options include no sort, local sort ,batch sort and global sort |
 | [TABLE_BLOCKSIZE](#table-block-size-configuration)           | Size of blocks to write onto hdfs                            |
@@ -104,6 +107,7 @@ CarbonData DDL statements are documented here,which includes:
 | [LONG_STRING_COLUMNS](#string-longer-than-32000-characters)  | Columns which are greater than 32K characters                |
 | [BUCKETNUMBER](#bucketing)                                   | Number of buckets to be created                              |
 | [BUCKETCOLUMNS](#bucketing)                                  | Columns which are to be placed in buckets                    |
+| [LOAD_MIN_SIZE_INMB](#load-minimum-data-size)                | Minimum input data size per node for data loading          |
 
  Following are the guidelines for TBLPROPERTIES, CarbonData's additional table options can be set via carbon.properties.
 
@@ -120,11 +124,11 @@ CarbonData DDL statements are documented here,which includes:
 
    - ##### Inverted Index Configuration
 
-     By default inverted index is enabled, it might help to improve compression ratio and query speed, especially for low cardinality columns which are in reward position.
+     By default the inverted index is disabled, which reduces the store size; it can be enabled with a table property. Enabling it might help to improve compression ratio and query speed, especially for low cardinality columns which are in reward position.
      Suggested use cases : For high cardinality columns, you can disable the inverted index for improving the data loading performance.
 
      ```
-     TBLPROPERTIES ('NO_INVERTED_INDEX'='column1, column3')
+     TBLPROPERTIES ('NO_INVERTED_INDEX'='column1', 'INVERTED_INDEX'='column2, column3')
      ```
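+
+     For illustration only, a minimal CREATE TABLE sketch using both properties might look like the following (the table and column names here are hypothetical):
+
+     ```
+     CREATE TABLE IF NOT EXISTS sales (
+       id STRING,
+       name STRING,
+       city STRING)
+     STORED AS carbondata
+     TBLPROPERTIES ('NO_INVERTED_INDEX'='id', 'INVERTED_INDEX'='name, city')
+     ```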
 
    - ##### Sort Columns Configuration
@@ -245,7 +249,8 @@ CarbonData DDL statements are documented here,which includes:
       * TIMESTAMP
       * DATE
       * BOOLEAN
-   
+      * FLOAT
+      * BYTE
    * In case of multi-level complex dataType columns, primitive string/varchar/char columns are considered for local dictionary generation.
 
    System Level Properties for Local Dictionary: 
@@ -445,7 +450,7 @@ CarbonData DDL statements are documented here,which includes:
    - ##### Compression for table
 
      Data compression is also supported by CarbonData.
-     By default, Snappy is used to compress the data. CarbonData also support ZSTD compressor.
+     By default, Snappy is used to compress the data. CarbonData also supports ZSTD compressor.
      User can specify the compressor in the table property:
 
      ```
@@ -474,7 +479,19 @@ CarbonData DDL statements are documented here,which includes:
      be later viewed in table description for reference.
 
      ```
-       TBLPROPERTIES('BAD_RECORD_PATH'='/opt/badrecords'')
+       TBLPROPERTIES('BAD_RECORD_PATH'='/opt/badrecords')
+     ```
+     
+   - ##### Load minimum data size
+     This property indicates the minimum input data size per node for data loading.
+     By default it is not enabled; setting a non-zero integer value enables the feature.
+     It is useful if you have a large cluster and only want a small portion of the nodes to process data loading.
+     For example, suppose you have a cluster with 10 nodes and the input data is about 1GB. Without this property, each node will process about 100MB of input data and produce at least 10 data files. With this property set to 512, only 2 nodes will be chosen to process the input data, each with about 512MB of input, producing about 2 to 4 files depending on the compression ratio.
+     Moreover, this property can also be specified in the load option.
+     Note that once this feature is enabled, carbondata will ignore data locality while assigning input data to nodes in order to balance the load, which will cause more network traffic.
+
+     ```
+       TBLPROPERTIES('LOAD_MIN_SIZE_INMB'='256')
      ```
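+
+     As a sketch, and assuming the load option uses the same name as the table property, the minimum size can also be supplied at load time (the file path and table name below are hypothetical):
+
+     ```
+       LOAD DATA INPATH '/tmp/input.csv' INTO TABLE test_table
+       OPTIONS('LOAD_MIN_SIZE_INMB'='256')
+     ```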
 
 ## CREATE TABLE AS SELECT
@@ -540,7 +557,7 @@ CarbonData DDL statements are documented here,which includes:
 
 ### Create external table on Non-Transactional table data location.
   Non-Transactional table data location will have only carbondata and carbonindex files, there will not be a metadata folder (table status and schema).
-  Our SDK module currently support writing data in this format.
+  Our SDK module currently supports writing data in this format.
 
   **Example:**
   ```
@@ -550,7 +567,7 @@ CarbonData DDL statements are documented here,which includes:
   ```
 
   Here writer path will have carbondata and index files.
-  This can be SDK output. Refer [SDK Guide](./sdk-guide.md). 
+  This can be SDK output or C++ SDK output. Refer [SDK Guide](./sdk-guide.md) and [C++ SDK Guide](./csdk-guide.md). 
 
   **Note:**
   1. Dropping of the external table should not delete the files present in the location.

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/site/markdown/dml-of-carbondata.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/dml-of-carbondata.md b/src/site/markdown/dml-of-carbondata.md
index 393ebd3..65654a4 100644
--- a/src/site/markdown/dml-of-carbondata.md
+++ b/src/site/markdown/dml-of-carbondata.md
@@ -58,7 +58,7 @@ CarbonData DML statements are documented here,which includes:
 | [COLUMNDICT](#columndict)                               | Path to read the dictionary data from for particular column  |
 | [DATEFORMAT](#dateformattimestampformat)                | Format of date in the input csv file                         |
 | [TIMESTAMPFORMAT](#dateformattimestampformat)           | Format of timestamp in the input csv file                    |
-| [SORT_COLUMN_BOUNDS](#sort-column-bounds)               | How to parititon the sort columns to make the evenly distributed |
+| [SORT_COLUMN_BOUNDS](#sort-column-bounds)               | How to partition the sort columns to make the data evenly distributed |
 | [SINGLE_PASS](#single_pass)                             | When to enable single pass data loading                      |
 | [BAD_RECORDS_LOGGER_ENABLE](#bad-records-handling)      | Whether to enable bad records logging                        |
 | [BAD_RECORD_PATH](#bad-records-handling)                | Bad records logging path. Useful when bad record logging is enabled |
@@ -83,7 +83,7 @@ CarbonData DML statements are documented here,which includes:
     ```
 
   - ##### COMMENTCHAR:
-    Comment Characters can be provided in the load command if user want to comment lines.
+    Comment characters can be provided in the load command if the user wants to comment out lines.
     ```
     OPTIONS('COMMENTCHAR'='#')
     ```
@@ -184,7 +184,7 @@ CarbonData DML statements are documented here,which includes:
 
     **NOTE:**
     * SORT_COLUMN_BOUNDS will be used only when the SORT_SCOPE is 'local_sort'.
-    * Carbondata will use these bounds as ranges to process data concurrently during the final sort percedure. The records will be sorted and written out inside each partition. Since the partition is sorted, all records will be sorted.
+    * Carbondata will use these bounds as ranges to process data concurrently during the final sort procedure. The records will be sorted and written out inside each partition. Since the partition is sorted, all records will be sorted.
     * Since the actual order and literal order of the dictionary column are not necessarily the same, we do not recommend you to use this feature if the first sort column is 'dictionary_include'.
     * The option works better if your CPU usage during loading is low. If your current system CPU usage is high, better not to use this option. Besides, it depends on the user to specify the bounds. If user does not know the exactly bounds to make the data distributed evenly among the bounds, loading performance will still be better than before or at least the same as before.
     * Users can find more information about this option in the description of PR1953.

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/site/markdown/documentation.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/documentation.md b/src/site/markdown/documentation.md
index 1b6726a..4a1d176 100644
--- a/src/site/markdown/documentation.md
+++ b/src/site/markdown/documentation.md
@@ -31,7 +31,7 @@ Apache CarbonData is a new big data file format for faster interactive query usi
 
 **CarbonData SQL Language Reference:** CarbonData extends the Spark SQL language and adds several [DDL](./ddl-of-carbondata.md) and [DML](./dml-of-carbondata.md) statements to support operations on it.Refer to the [Reference Manual](./language-manual.md) to understand the supported features and functions.
 
-**Programming Guides:** You can read our guides about [APIs supported](./sdk-guide.md) to learn how to integrate CarbonData with your applications.
+**Programming Guides:** You can read our guides about [Java APIs supported](./sdk-guide.md) or [C++ APIs supported](./csdk-guide.md) to learn how to integrate CarbonData with your applications.
 
 
 

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/site/markdown/faq.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/faq.md b/src/site/markdown/faq.md
index 3ac9a0a..dbcda4f 100644
--- a/src/site/markdown/faq.md
+++ b/src/site/markdown/faq.md
@@ -216,20 +216,18 @@ TimeZone.setDefault(TimeZone.getTimeZone("Asia/Shanghai"))
 ## How to check LRU cache memory footprint?
 To observe the LRU cache memory footprint in the logs, configure the below properties in log4j.properties file.
 ```
-log4j.logger.org.apache.carbondata.core.memory.UnsafeMemoryManager = DEBUG
 log4j.logger.org.apache.carbondata.core.cache.CarbonLRUCache = DEBUG
 ```
-These properties will enable the DEBUG log for the CarbonLRUCache and UnsafeMemoryManager which will print the information of memory consumed using which the LRU cache size can be decided. **Note:** Enabling the DEBUG log will degrade the query performance.
+This property will enable the DEBUG log for CarbonLRUCache, which will print the memory consumed, from which the LRU cache size can be decided. **Note:** Enabling the DEBUG log will degrade the query performance. Ensure carbon.max.driver.lru.cache.size is configured to observe the current cache size.
 
 **Example:**
 ```
-18/09/26 15:05:28 DEBUG UnsafeMemoryManager: pool-44-thread-1 Memory block (org.apache.carbondata.core.memory.MemoryBlock@21312095) is created with size 10. Total memory used 413Bytes, left 536870499Bytes
 18/09/26 15:05:29 DEBUG CarbonLRUCache: main Required size for entry /home/target/store/default/stored_as_carbondata_table/Fact/Part0/Segment_0/0_1537954529044.carbonindexmerge :: 181 Current cache size :: 0
-18/09/26 15:05:30 DEBUG UnsafeMemoryManager: main Freeing memory of size: 105available memory:  536870836
-18/09/26 15:05:30 DEBUG UnsafeMemoryManager: main Freeing memory of size: 76available memory:  536870912
 18/09/26 15:05:30 INFO CarbonLRUCache: main Removed entry from InMemory lru cache :: /home/target/store/default/stored_as_carbondata_table/Fact/Part0/Segment_0/0_1537954529044.carbonindexmerge
 ```
+**Note:** If `Removed entry from InMemory lru cache` messages are frequently observed in the logs, you may have to increase the configured LRU cache size.
 
+To observe the LRU cache footprint from a heap dump, check the heap used by the CarbonLRUCache class.
 ## Getting tablestatus.lock issues When loading data
 
   **Symptom**

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/site/markdown/file-structure-of-carbondata.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/file-structure-of-carbondata.md b/src/site/markdown/file-structure-of-carbondata.md
index 8eacd38..7127f37 100644
--- a/src/site/markdown/file-structure-of-carbondata.md
+++ b/src/site/markdown/file-structure-of-carbondata.md
@@ -122,8 +122,7 @@ Compared with V2: The blocklet data volume of V2 format defaults to 120,000 line
 
 #### Footer format
 
-Footer records each carbondata
-All blocklet data distribution information and statistical related metadata information (minmax, startkey/endkey) inside the file.
+The footer of each carbondata file records the data distribution information of all blocklets and the statistics-related metadata (minmax, startkey/endkey) inside the file.
 
 ![Footer format](../../src/site/images/2-3_4.png)
 

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/site/markdown/performance-tuning.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/performance-tuning.md b/src/site/markdown/performance-tuning.md
index 6c87ce9..7059605 100644
--- a/src/site/markdown/performance-tuning.md
+++ b/src/site/markdown/performance-tuning.md
@@ -142,7 +142,7 @@
 |-----------|-------------|--------|
 |carbon.number.of.cores.while.loading|Default: 2. This value should be >= 2|Specifies the number of cores used for data processing during data loading in CarbonData. |
 |carbon.sort.size|Default: 100000. The value should be >= 100.|Threshold to write local file in sort step when loading data|
-|carbon.sort.file.write.buffer.size|Default:  50000.|DataOutputStream buffer. |
+|carbon.sort.file.write.buffer.size|Default:  16384.|CarbonData sorts and writes data to intermediate files to limit the memory usage. This configuration determines the buffer size to be used for reading and writing such files. |
 |carbon.merge.sort.reader.thread|Default: 3 |Specifies the number of cores used for temp file merging during data loading in CarbonData.|
 |carbon.merge.sort.prefetch|Default: true | You may want set this value to false if you have not enough memory|
 
@@ -168,10 +168,9 @@
 | carbon.compaction.level.threshold | spark/carbonlib/carbon.properties | Data loading and Querying | For minor compaction, specifies the number of segments to be merged in stage 1 and number of compacted segments to be merged in stage 2. | Each CarbonData load will create one segment, if every load is small in size it will generate many small files over a period of time impacting the query performance. Configuring this parameter will merge the small segment to one big segment which will sort the data and improve the performance. For Example in one telecommunication scenario, the performance improves about 2 times after minor compaction. |
 | spark.sql.shuffle.partitions | spark/conf/spark-defaults.conf | Querying | The number of task started when spark shuffle. | The value can be 1 to 2 times as much as the executor cores. In an aggregation scenario, reducing the number from 200 to 32 reduced the query time from 17 to 9 seconds. |
 | spark.executor.instances/spark.executor.cores/spark.executor.memory | spark/conf/spark-defaults.conf | Querying | The number of executors, CPU cores, and memory used for CarbonData query. | In the bank scenario, we provide the 4 CPUs cores and 15 GB for each executor which can get good performance. This 2 value does not mean more the better. It needs to be configured properly in case of limited resources. For example, In the bank scenario, it has enough CPU 32 cores each node but less memory 64 GB each node. So we cannot give more CPU but less memory. For example, when 4 cores and 12GB for each executor. It sometimes happens GC during the query which impact the query performance very much from the 3 second to more than 15 seconds. In this scenario need to increase the memory or decrease the CPU cores. |
-| carbon.detail.batch.size | spark/carbonlib/carbon.properties | Data loading | The buffer size to store records, returned from the block scan. | In limit scenario this parameter is very important. For example your query limit is 1000. But if we set this value to 3000 that means we get 3000 records from scan but spark will only take 1000 rows. So the 2000 remaining are useless. In one Finance test case after we set it to 100, in the limit 1000 scenario the performance increase about 2 times in comparison to if we set this value to 12000. |
+| carbon.detail.batch.size | spark/carbonlib/carbon.properties | Querying | The buffer size to store records, returned from the block scan. | In a limit scenario this parameter is very important. For example, if the query limit is 1000 but this value is set to 3000, the scan returns 3000 records while Spark takes only 1000 rows, so the remaining 2000 are wasted. In one finance test case, setting it to 100 in the limit-1000 scenario improved performance about 2 times compared to setting it to 12000. |
 | carbon.use.local.dir | spark/carbonlib/carbon.properties | Data loading | Whether use YARN local directories for multi-table load disk load balance | If this is set it to true CarbonData will use YARN local directories for multi-table load disk load balance, that will improve the data load performance. |
-| carbon.use.multiple.temp.dir | spark/carbonlib/carbon.properties | Data loading | Whether to use multiple YARN local directories during table data loading for disk load balance | After enabling 'carbon.use.local.dir', if this is set to true, CarbonData will use all YARN local directories during data load for disk load balance, that will improve the data load performance. Please enable this property when you encounter disk hotspot problem during data loading. |
-| carbon.sort.temp.compressor | spark/carbonlib/carbon.properties | Data loading | Specify the name of compressor to compress the intermediate sort temporary files during sort procedure in data loading. | The optional values are 'SNAPPY','GZIP','BZIP2','LZ4','ZSTD', and empty. By default, empty means that Carbondata will not compress the sort temp files. This parameter will be useful if you encounter disk bottleneck. |
+| carbon.sort.temp.compressor | spark/carbonlib/carbon.properties | Data loading | Specify the name of compressor to compress the intermediate sort temporary files during sort procedure in data loading. | The optional values are 'SNAPPY','GZIP','BZIP2','LZ4','ZSTD', and empty. Specifically, an empty value means that CarbonData will not compress the sort temp files. This parameter will be useful if you encounter disk bottleneck. |
 | carbon.load.skewedDataOptimization.enabled | spark/carbonlib/carbon.properties | Data loading | Whether to enable size based block allocation strategy for data loading. | When loading, carbondata will use file size based block allocation strategy for task distribution. It will make sure that all the executors process the same size of data -- It's useful if the size of your input data files varies widely, say 1MB to 1GB. |
 | carbon.load.min.size.enabled | spark/carbonlib/carbon.properties | Data loading | Whether to enable node minumun input data size allocation strategy for data loading.| When loading, carbondata will use node minumun input data size allocation strategy for task distribution. It will make sure the nodes load the minimum amount of data -- It's useful if the size of your input data files very small, say 1MB to 256MB,Avoid generating a large number of small files. |
 

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/site/markdown/quick-start-guide.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/quick-start-guide.md b/src/site/markdown/quick-start-guide.md
index fd535ae..a14b1cd 100644
--- a/src/site/markdown/quick-start-guide.md
+++ b/src/site/markdown/quick-start-guide.md
@@ -294,7 +294,7 @@ hdfs://<host_name>:port/user/hive/warehouse/carbon.store
 ## Installing and Configuring CarbonData on Presto
 
 **NOTE:** **CarbonData tables cannot be created nor loaded from Presto. User need to create CarbonData Table and load data into it
-either with [Spark](#installing-and-configuring-carbondata-to-run-locally-with-spark-shell) or [SDK](./sdk-guide.md).
+with [Spark](#installing-and-configuring-carbondata-to-run-locally-with-spark-shell), [SDK](./sdk-guide.md) or [C++ SDK](./csdk-guide.md).
 Once the table is created,it can be queried from Presto.**
 
 

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/site/markdown/sdk-guide.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/sdk-guide.md b/src/site/markdown/sdk-guide.md
index be42b3f..8abc3b1 100644
--- a/src/site/markdown/sdk-guide.md
+++ b/src/site/markdown/sdk-guide.md
@@ -24,11 +24,15 @@ CarbonData provides SDK to facilitate
 
 # SDK Writer
 
-In the carbon jars package, there exist a carbondata-store-sdk-x.x.x-SNAPSHOT.jar, including SDK writer and reader.
+In the carbon jars package, there exists a carbondata-store-sdk-x.x.x-SNAPSHOT.jar, which includes the SDK writer and reader.
+To use the SDK, besides carbondata-store-sdk-x.x.x-SNAPSHOT.jar, the following jars are also needed:
+carbondata-core-x.x.x-SNAPSHOT.jar, carbondata-common-x.x.x-SNAPSHOT.jar,
+carbondata-format-x.x.x-SNAPSHOT.jar, carbondata-hadoop-x.x.x-SNAPSHOT.jar and carbondata-processing-x.x.x-SNAPSHOT.jar.
+Alternatively, the user can use carbondata-sdk.jar directly.
 
 This SDK writer, writes carbondata file and carbonindex file at a given path.
 External client can make use of this writer to convert other format data or live data to create carbondata and index files.
-These SDK writer output contains just a carbondata and carbonindex files. No metadata folder will be present.
+The output of this SDK writer contains just carbondata and carbonindex files. No metadata folder will be present.
 
 ## Quick example
 
@@ -67,7 +71,7 @@ These SDK writer output contains just a carbondata and carbonindex files. No met
 
      CarbonProperties.getInstance().addProperty("enable.offheap.sort", enableOffheap);
  
-     CarbonWriterBuilder builder = CarbonWriter.builder().outputPath(path).withCsvInput(schema);
+     CarbonWriterBuilder builder = CarbonWriter.builder().outputPath(path).withCsvInput(schema).writtenBy("SDK");
  
      CarbonWriter writer = builder.build();
  
@@ -124,7 +128,7 @@ public class TestSdkAvro {
     try {
       CarbonWriter writer = CarbonWriter.builder()
           .outputPath(path)
-          .withAvroInput(new org.apache.avro.Schema.Parser().parse(avroSchema)).build();
+          .withAvroInput(new org.apache.avro.Schema.Parser().parse(avroSchema)).writtenBy("SDK").build();
 
       for (int i = 0; i < 100; i++) {
         writer.write(record);
@@ -164,7 +168,7 @@ public class TestSdkJson {
 
     Schema CarbonSchema = new Schema(fields);
 
-    CarbonWriterBuilder builder = CarbonWriter.builder().outputPath(path).withJsonInput(CarbonSchema);
+    CarbonWriterBuilder builder = CarbonWriter.builder().outputPath(path).withJsonInput(CarbonSchema).writtenBy("SDK");
 
     // initialize json writer with carbon schema
     CarbonWriter writer = builder.build();
@@ -371,6 +375,8 @@ public CarbonWriterBuilder withLoadOptions(Map<String, String> options);
 * j. sort_scope -- "local_sort", "no_sort", "batch_sort". default value is "local_sort"
 * k. long_string_columns -- comma separated string columns which are more than 32k length. 
 *                           default value is null.
+* l. inverted_index -- comma separated string columns for which inverted index needs to be
+*                      generated
 *
 * @return updated CarbonWriterBuilder
 */
@@ -400,6 +406,17 @@ public CarbonWriterBuilder withHadoopConf(Configuration conf)
 ```
 
 ```
+  /**
+   * Updates the hadoop configuration with the given key value
+   *
+   * @param key   key word
+   * @param value value
+   * @return this object
+   */
+  public CarbonWriterBuilder withHadoopConf(String key, String value);
+```
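+
+A minimal hedged sketch of this key/value variant; the S3 credential keys below are standard Hadoop configuration names and are used purely as an illustration:
+
+```
+  CarbonWriterBuilder builder = CarbonWriter.builder()
+      .outputPath(path)
+      .withHadoopConf("fs.s3a.access.key", "<access-key>")
+      .withHadoopConf("fs.s3a.secret.key", "<secret-key>");
+```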
+
+```
 /**
 * to build a {@link CarbonWriter}, which accepts row in CSV format
 *
@@ -431,6 +448,27 @@ public CarbonWriterBuilder withJsonInput(Schema carbonSchema);
 
 ```
 /**
+* To record the name of the application that is writing the carbondata file
+* This is a mandatory API to call, else the build() call will fail with an error.
+* @param appName name of the application which is writing the carbondata files
+* @return CarbonWriterBuilder
+*/
+public CarbonWriterBuilder writtenBy(String appName);
+```
+
+```
+/**
+* Sets the list of columns for which inverted index needs to be generated
+* @param invertedIndexColumns is a string array of columns for which inverted index needs to
+* be generated.
+* If it is null or an empty array, inverted index will be generated for none of the columns
+* @return updated CarbonWriterBuilder
+*/
+public CarbonWriterBuilder invertedIndexFor(String[] invertedIndexColumns);
+```
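+
+As a hedged usage sketch, the writtenBy and invertedIndexFor calls documented above can be chained on the builder; 'path' and 'schema' are assumed to be prepared as in the quick example:
+
+```
+CarbonWriter writer = CarbonWriter.builder()
+    .outputPath(path)
+    .withCsvInput(schema)
+    .writtenBy("SDK")                        // mandatory, otherwise build() fails
+    .invertedIndexFor(new String[]{"name"})  // generate inverted index for column 'name'
+    .build();
+```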
+
+```
+/**
 * Build a {@link CarbonWriter}
 * This writer is not thread safe,
 * use withThreadSafe() configuration in multi thread environment
@@ -442,7 +480,25 @@ public CarbonWriterBuilder withJsonInput(Schema carbonSchema);
 public CarbonWriter build() throws IOException, InvalidLoadOptionException;
 ```
 
+```
+ /**
+   * Configure Row Record Reader for reading.
+   *
+   */
+  public CarbonReaderBuilder withRowRecordReader();
+```
+
 ### Class org.apache.carbondata.sdk.file.CarbonWriter
+
+```
+/**
+* Create a {@link CarbonWriterBuilder} to build a {@link CarbonWriter}
+*/
+public static CarbonWriterBuilder builder() {
+    return new CarbonWriterBuilder();
+}
+```
+
 ```
 /**
 * Write an object to the file, the format of the object depends on the implementation
@@ -463,15 +519,6 @@ public abstract void write(Object object) throws IOException;
 public abstract void close() throws IOException;
 ```
 
-```
-/**
-* Create a {@link CarbonWriterBuilder} to build a {@link CarbonWriter}
-*/
-public static CarbonWriterBuilder builder() {
-    return new CarbonWriterBuilder();
-}
-```
-
 ### Class org.apache.carbondata.sdk.file.Field
 ```
 /**
@@ -581,6 +628,26 @@ Find example code at [CarbonReaderExample](https://github.com/apache/carbondata/
 ```
 
 ```
+/**
+  * Breaks the list of CarbonRecordReader in CarbonReader into multiple
+  * CarbonReader objects, each iterating through some 'carbondata' files
+  * and return that list of CarbonReader objects
+  *
+  * If the no. of files is greater than maxSplits, then break the
+  * CarbonReader into maxSplits splits, with each split iterating
+  * through >= 1 file.
+  *
+  * If the no. of files is less than maxSplits, then return list of
+  * CarbonReader with size as the no. of files, with each CarbonReader
+  * iterating through exactly one file
+  *
+  * @param maxSplits: Int
+  * @return list of CarbonReader objects
+  */
+  public List<CarbonReader> split(int maxSplits);
+```
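+
+A hedged sketch of using split to fan out reading across multiple readers; the initial 'reader' is assumed to be built as in the CarbonReaderExample referenced above:
+
+```
+  List<CarbonReader> readers = reader.split(4);
+  for (CarbonReader r : readers) {
+    while (r.hasNext()) {
+      Object row = r.readNextRow();
+      // process the row ...
+    }
+    r.close();
+  }
+```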
+
+```
   /**
    * Return true if has next row
    */
@@ -596,6 +663,13 @@ Find example code at [CarbonReaderExample](https://github.com/apache/carbondata/
 
 ```
   /**
+   * Read and return next batch row objects
+   */
+  public Object[] readNextBatchRow();
+```
+
+```
+  /**
    * Close reader
    */
   public void close();
@@ -633,6 +707,16 @@ Find example code at [CarbonReaderExample](https://github.com/apache/carbondata/
 ```
 
 ```
+  /**
+   * Sets the batch size of records to read
+   *
+   * @param batch batch size
+   * @return updated CarbonReaderBuilder
+   */
+  public CarbonReaderBuilder withBatch(int batch);
+```
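+
+A hedged sketch of batch reading, combining withBatch with the readNextBatchRow API shown earlier; the 'path' value and the '_temp' table name follow the CarbonReaderExample convention and are assumptions here:
+
+```
+  CarbonReader reader = CarbonReader.builder(path, "_temp")
+      .withBatch(1000)
+      .build();
+  while (reader.hasNext()) {
+    Object[] batch = reader.readNextBatchRow();
+    // process the batch of rows ...
+  }
+  reader.close();
+```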
+
+```
 /**
  * To support hadoop configuration
  *
@@ -643,6 +727,17 @@ Find example code at [CarbonReaderExample](https://github.com/apache/carbondata/
 ```
 
 ```
+  /**
+   * Updates the hadoop configuration with the given key value
+   *
+   * @param key   key word
+   * @param value value
+   * @return this object
+   */
+  public CarbonReaderBuilder withHadoopConf(String key, String value);
+```
+  
+```
  /**
    * Build CarbonReader
    *
@@ -662,6 +757,7 @@ Find example code at [CarbonReaderExample](https://github.com/apache/carbondata/
    * @return schema object
    * @throws IOException
    */
+  @Deprecated
   public static Schema readSchemaInSchemaFile(String schemaFilePath);
 ```
 
@@ -672,6 +768,7 @@ Find example code at [CarbonReaderExample](https://github.com/apache/carbondata/
    * @param dataFilePath complete path including carbondata file name
    * @return Schema object
    */
+  @Deprecated
   public static Schema readSchemaInDataFile(String dataFilePath);
 ```
 
@@ -683,9 +780,49 @@ Find example code at [CarbonReaderExample](https://github.com/apache/carbondata/
    * @return schema object
    * @throws IOException
    */
+  @Deprecated
   public static Schema readSchemaInIndexFile(String indexFilePath);
 ```
 
+```
+  /**
+   * Read schema from path; path can be a folder path, carbonindex file path, or carbondata file path,
+   * and it will not check the schema of all files
+   *
+   * @param path file/folder path
+   * @return schema
+   * @throws IOException
+   */
+  public static Schema readSchema(String path);
+```
+
+```
+  /**
+   * Read schema from path; path can be a folder path, carbonindex file path, or carbondata file path,
+   * and the user can decide whether to check the schema of all files
+   *
+   * @param path             file/folder path
+   * @param validateSchema whether to check the schema of all files
+   * @return schema
+   * @throws IOException
+   */
+  public static Schema readSchema(String path, boolean validateSchema);
+```
+
+```
+  /**
+   * This method returns the version details as a formatted string by reading from the carbondata file.
+   * If the application name is SDK_1.0.0 and it has written the carbondata file with carbondata project version 1.6.0,
+   * then this API returns the string "SDK_1.0.0 in version: 1.6.0-SNAPSHOT".
+   * @param dataFilePath complete path including carbondata file name
+   * @return string with information of who has written this file in which carbondata project version
+   * @throws IOException
+   */
+  public static String getVersionDetails(String dataFilePath);
+```
+
 ### Class org.apache.carbondata.sdk.file.Schema
 ```
   /**

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/site/markdown/streaming-guide.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/streaming-guide.md b/src/site/markdown/streaming-guide.md
index 714b07a..0987ed2 100644
--- a/src/site/markdown/streaming-guide.md
+++ b/src/site/markdown/streaming-guide.md
@@ -31,9 +31,10 @@
 - [StreamSQL](#streamsql)
   - [Defining Streaming Table](#streaming-table)
   - [Streaming Job Management](#streaming-job-management)
-    - [START STREAM](#start-stream)
-    - [STOP STREAM](#stop-stream)
+    - [CREATE STREAM](#create-stream)
+    - [DROP STREAM](#drop-stream)
     - [SHOW STREAMS](#show-streams)
+    - [CLOSE STREAM](#alter-table-close-stream)
 
 ## Quick example
 Download and unzip spark-2.2.0-bin-hadoop2.7.tgz, and export $SPARK_HOME
@@ -333,7 +334,7 @@ Following example shows how to start a streaming ingest job
 
     sql(
       """
-        |START STREAM job123 ON TABLE sink
+        |CREATE STREAM job123 ON TABLE sink
         |STMPROPERTIES(
         |  'trigger'='ProcessingTime',
         |  'interval'='1 seconds')
@@ -343,7 +344,7 @@ Following example shows how to start a streaming ingest job
         |  WHERE id % 2 = 1
       """.stripMargin)
 
-    sql("STOP STREAM job123")
+    sql("DROP STREAM job123")
 
     sql("SHOW STREAMS [ON TABLE tableName]")
 ```
@@ -360,13 +361,13 @@ These two tables are normal carbon tables, they can be queried independently.
 
 As above example shown:
 
-- `START STREAM jobName ON TABLE tableName` is used to start a streaming ingest job. 
-- `STOP STREAM jobName` is used to stop a streaming job by its name
+- `CREATE STREAM jobName ON TABLE tableName` is used to start a streaming ingest job. 
+- `DROP STREAM jobName` is used to stop a streaming job by its name
 - `SHOW STREAMS [ON TABLE tableName]` is used to print streaming job information
 
 
 
-##### START STREAM
+##### CREATE STREAM
 
 When this is issued, carbon will start a structured streaming job to do the streaming ingestion. Before launching the job, system will validate:
 
@@ -424,11 +425,25 @@ For Kafka data source, create the source table by:
   )
   ```
 
+- Then CREATE STREAM can be used to start the streaming ingest job from the source table to the sink table:
+```
+CREATE STREAM job123 ON TABLE sink
+STMPROPERTIES(
+    'trigger'='ProcessingTime',
+     'interval'='10 seconds'
+) 
+AS
+   SELECT *
+   FROM source
+   WHERE id % 2 = 1
+```
 
-##### STOP STREAM
-
-When this is issued, the streaming job will be stopped immediately. It will fail if the jobName specified is not exist.
+##### DROP STREAM
 
+When `DROP STREAM` is issued, the streaming job will be stopped immediately. It will fail if the specified jobName does not exist.
+```
+DROP STREAM job123
+```
 
 
 ##### SHOW STREAMS
@@ -441,4 +456,9 @@ When this is issued, the streaming job will be stopped immediately. It will fail
 
 `SHOW STREAMS` command will show all stream jobs in the system.
 
+##### ALTER TABLE CLOSE STREAM
+
+When the streaming application is stopped and the user wants to manually trigger data conversion from carbon streaming files to columnar files, one can use
+`ALTER TABLE sink COMPACT 'CLOSE_STREAMING';`
+
 

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/site/markdown/usecases.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/usecases.md b/src/site/markdown/usecases.md
index e8b98b5..c029bb3 100644
--- a/src/site/markdown/usecases.md
+++ b/src/site/markdown/usecases.md
@@ -72,7 +72,6 @@ Apart from these, the following CarbonData configuration was suggested to be con
 | Data Loading | table_blocksize                         | 256  | To efficiently schedule multiple tasks during query |
 | Data Loading | carbon.sort.intermediate.files.limit    | 100    | Increased to 100 as number of cores are more.Can perform merging in backgorund.If less number of files to merge, sort threads would be idle |
 | Data Loading | carbon.use.local.dir                    | TRUE   | yarn application directory will be usually on a single disk.YARN would be configured with multiple disks to be used as temp or to assign randomly to applications. Using the yarn temp directory will allow carbon to use multiple disks and improve IO performance |
-| Data Loading | carbon.use.multiple.temp.dir            | TRUE   | multiple disks to write sort files will lead to better IO and reduce the IO bottleneck |
 | Compaction | carbon.compaction.level.threshold       | 6,6    | Since frequent small loads, compacting more segments will give better query results |
 | Compaction | carbon.enable.auto.load.merge           | true   | Since data loading is small,auto compacting keeps the number of segments less and also compaction can complete in  time |
 | Compaction | carbon.number.of.cores.while.compacting | 4      | Higher number of cores can improve the compaction speed |
@@ -127,7 +126,6 @@ Use all columns are no-dictionary as the cardinality is high.
 | Data Loading | table_blocksize                         | 512                     | To efficiently schedule multiple tasks during query. This size depends on data scenario.If data is such that the filters would select less number of blocklets to scan, keeping higher number works well.If the number blocklets to scan is more, better to reduce the size as more tasks can be scheduled in parallel. |
 | Data Loading | carbon.sort.intermediate.files.limit    | 100                     | Increased to 100 as number of cores are more.Can perform merging in backgorund.If less number of files to merge, sort threads would be idle |
 | Data Loading | carbon.use.local.dir                    | TRUE                    | yarn application directory will be usually on a single disk.YARN would be configured with multiple disks to be used as temp or to assign randomly to applications. Using the yarn temp directory will allow carbon to use multiple disks and improve IO performance |
-| Data Loading | carbon.use.multiple.temp.dir            | TRUE                    | multiple disks to write sort files will lead to better IO and reduce the IO bottleneck |
 | Data Loading | sort.inmemory.size.in.mb                | 92160 | Memory allocated to do inmemory sorting. When more memory is available in the node, configuring this will retain more sort blocks in memory so that the merge sort is faster due to no/very less IO |
 | Compaction | carbon.major.compaction.size            | 921600                  | Sum of several loads to combine into single segment |
 | Compaction | carbon.number.of.cores.while.compacting | 12                      | Higher number of cores can improve the compaction speed.Data size is huge.Compaction need to use more threads to speed up the process |


[3/8] carbondata-site git commit: Added 1.5.1 version information

Posted by ra...@apache.org.
http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/ddl-of-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/ddl-of-carbondata.html b/src/main/webapp/ddl-of-carbondata.html
index 434f378..7f84786 100644
--- a/src/main/webapp/ddl-of-carbondata.html
+++ b/src/main/webapp/ddl-of-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -223,7 +223,7 @@
 <p>CarbonData DDL statements are documented here,which includes:</p>
 <ul>
 <li>
-<a href="#create-table">CREATE TABLE</a>
+<p><a href="#create-table">CREATE TABLE</a></p>
 <ul>
 <li><a href="#dictionary-encoding-configuration">Dictionary Encoding</a></li>
 <li><a href="#inverted-index-configuration">Inverted Index</a></li>
@@ -239,19 +239,24 @@
 <li><a href="#string-longer-than-32000-characters">Extra Long String columns</a></li>
 <li><a href="#compression-for-table">Compression for Table</a></li>
 <li><a href="#bad-records-path">Bad Records Path</a></li>
+<li><a href="#load-minimum-data-size">Load Minimum Input File Size</a></li>
 </ul>
 </li>
-<li><a href="#create-table-as-select">CREATE TABLE AS SELECT</a></li>
 <li>
-<a href="#create-external-table">CREATE EXTERNAL TABLE</a>
+<p><a href="#create-table-as-select">CREATE TABLE AS SELECT</a></p>
+</li>
+<li>
+<p><a href="#create-external-table">CREATE EXTERNAL TABLE</a></p>
 <ul>
 <li><a href="#create-external-table-on-managed-table-data-location">External Table on Transactional table location</a></li>
 <li><a href="#create-external-table-on-non-transactional-table-data-location">External Table on non-transactional table location</a></li>
 </ul>
 </li>
-<li><a href="#create-database">CREATE DATABASE</a></li>
 <li>
-<a href="#table-management">TABLE MANAGEMENT</a>
+<p><a href="#create-database">CREATE DATABASE</a></p>
+</li>
+<li>
+<p><a href="#table-management">TABLE MANAGEMENT</a></p>
 <ul>
 <li><a href="#show-table">SHOW TABLE</a></li>
 <li>
@@ -271,7 +276,7 @@
 </ul>
 </li>
 <li>
-<a href="#partition">PARTITION</a>
+<p><a href="#partition">PARTITION</a></p>
 <ul>
 <li>
 <a href="#standard-partition">STANDARD PARTITION(HIVE)</a>
@@ -293,7 +298,9 @@
 <li><a href="#drop-a-partition">DROP PARTITION</a></li>
 </ul>
 </li>
-<li><a href="#bucketing">BUCKETING</a></li>
+<li>
+<p><a href="#bucketing">BUCKETING</a></p>
+</li>
 </ul>
 <h2>
 <a id="create-table" class="anchor" href="#create-table" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CREATE TABLE</h2>
@@ -324,6 +331,10 @@ STORED AS carbondata
 <td>Columns to exclude from inverted index generation</td>
 </tr>
 <tr>
+<td><a href="#inverted-index-configuration">INVERTED_INDEX</a></td>
+<td>Columns to include for inverted index generation</td>
+</tr>
+<tr>
 <td><a href="#sort-columns-configuration">SORT_COLUMNS</a></td>
 <td>Columns to include in sort and its order of sort</td>
 </tr>
@@ -403,6 +414,10 @@ STORED AS carbondata
 <td><a href="#bucketing">BUCKETCOLUMNS</a></td>
 <td>Columns which are to be placed in buckets</td>
 </tr>
+<tr>
+<td><a href="#load-minimum-data-size">LOAD_MIN_SIZE_INMB</a></td>
+<td>Minimum input data size per node for data loading</td>
+</tr>
 </tbody>
 </table>
 <p>Following are the guidelines for TBLPROPERTIES, CarbonData's additional table options can be set via carbon.properties.</p>
@@ -419,9 +434,9 @@ Suggested use cases : do dictionary encoding for low cardinality columns, it mig
 <li>
 <h5>
 <a id="inverted-index-configuration" class="anchor" href="#inverted-index-configuration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Inverted Index Configuration</h5>
-<p>By default inverted index is enabled, it might help to improve compression ratio and query speed, especially for low cardinality columns which are in reward position.
+<p>By default the inverted index is disabled, which reduces the store size; it can be enabled with a table property. Enabling it might help to improve compression ratio and query speed, especially for low cardinality columns which are in reward position.
 Suggested use cases : For high cardinality columns, you can disable the inverted index for improving the data loading performance.</p>
-<pre><code>TBLPROPERTIES ('NO_INVERTED_INDEX'='column1, column3')
+<pre><code>TBLPROPERTIES ('NO_INVERTED_INDEX'='column1', 'INVERTED_INDEX'='column2, column3')
 </code></pre>
 </li>
 <li>
@@ -549,6 +564,8 @@ Following are 5 configurations:</p>
 <li>TIMESTAMP</li>
 <li>DATE</li>
 <li>BOOLEAN</li>
+<li>FLOAT</li>
+<li>BYTE</li>
 </ul>
 </li>
 <li>
@@ -746,7 +763,7 @@ You can refer to SDKwriterTestCase for example.</p>
 <h5>
 <a id="compression-for-table" class="anchor" href="#compression-for-table" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Compression for table</h5>
 <p>Data compression is also supported by CarbonData.
-By default, Snappy is used to compress the data. CarbonData also support ZSTD compressor.
+By default, Snappy is used to compress the data. CarbonData also supports ZSTD compressor.
 User can specify the compressor in the table property:</p>
 <pre><code>TBLPROPERTIES('carbon.column.compressor'='snappy')
 </code></pre>
@@ -770,7 +787,19 @@ The corresponding system property is configured in carbon.properties file as bel
 As the table path remains the same after rename therefore the user can use this property to
 specify bad records path for the table at the time of creation, so that the same path can
 be later viewed in table description for reference.</p>
-<pre><code>  TBLPROPERTIES('BAD_RECORD_PATH'='/opt/badrecords'')
+<pre><code>  TBLPROPERTIES('BAD_RECORD_PATH'='/opt/badrecords')
+</code></pre>
+</li>
+<li>
+<h5>
+<a id="load-minimum-data-size" class="anchor" href="#load-minimum-data-size" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Load minimum data size</h5>
+<p>This property indicates the minimum input data size per node for data loading.
+By default it is not enabled; setting a non-zero integer value enables the feature.
+It is useful if you have a large cluster and only want a small portion of the nodes to process data loading.
+For example, suppose you have a cluster with 10 nodes and the input data is about 1GB. Without this property, each node will process about 100MB of input data and produce at least 10 data files. With this property set to 512, only 2 nodes will be chosen to process the input data, each with about 512MB of input, producing about 2 to 4 files depending on the compression ratio.
+Moreover, this property can also be specified in the load option.
+Note that once this feature is enabled, carbondata will ignore data locality while assigning input data to nodes in order to balance the load, which will cause more network traffic.</p>
+<pre><code>  TBLPROPERTIES('LOAD_MIN_SIZE_INMB'='256')
 </code></pre>
 </li>
 </ul>
@@ -832,14 +861,14 @@ checkAnswer(sql("SELECT count(*) from source"), sql("SELECT count(*) from origin
 <h3>
 <a id="create-external-table-on-non-transactional-table-data-location" class="anchor" href="#create-external-table-on-non-transactional-table-data-location" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Create external table on Non-Transactional table data location.</h3>
 <p>Non-Transactional table data location will have only carbondata and carbonindex files, there will not be a metadata folder (table status and schema).
-Our SDK module currently support writing data in this format.</p>
+Our SDK module currently supports writing data in this format.</p>
 <p><strong>Example:</strong></p>
 <pre><code>sql(
 s"""CREATE EXTERNAL TABLE sdkOutputTable STORED AS carbondata LOCATION
 |'$writerPath' """.stripMargin)
 </code></pre>
 <p>Here writer path will have carbondata and index files.
-This can be SDK output. Refer <a href="./sdk-guide.html">SDK Guide</a>.</p>
+This can be SDK output or C++ SDK output. Refer <a href="./sdk-guide.html">SDK Guide</a> and <a href="./csdk-guide.html">C++ SDK Guide</a>.</p>
 <p><strong>Note:</strong></p>
 <ol>
 <li>Dropping of the external table should not delete the files present in the location.</li>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/dml-of-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/dml-of-carbondata.html b/src/main/webapp/dml-of-carbondata.html
index a96578a..15ff807 100644
--- a/src/main/webapp/dml-of-carbondata.html
+++ b/src/main/webapp/dml-of-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -306,7 +306,7 @@ OPTIONS(property_name=property_value, ...)
 </tr>
 <tr>
 <td><a href="#sort-column-bounds">SORT_COLUMN_BOUNDS</a></td>
-<td>How to parititon the sort columns to make the evenly distributed</td>
+<td>How to partition the sort columns to make the data evenly distributed</td>
 </tr>
 <tr>
 <td><a href="#single_pass">SINGLE_PASS</a></td>
@@ -353,7 +353,7 @@ OPTIONS(property_name=property_value, ...)
 <li>
 <h5>
 <a id="commentchar" class="anchor" href="#commentchar" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>COMMENTCHAR:</h5>
-<p>Comment Characters can be provided in the load command if user want to comment lines.</p>
+<p>Comment characters can be provided in the load command if the user wants to comment out lines.</p>
 <pre><code>OPTIONS('COMMENTCHAR'='#')
 </code></pre>
 </li>
@@ -443,7 +443,7 @@ true: CSV file is with file header.</p>
 <p><strong>NOTE:</strong></p>
 <ul>
 <li>SORT_COLUMN_BOUNDS will be used only when the SORT_SCOPE is 'local_sort'.</li>
-<li>Carbondata will use these bounds as ranges to process data concurrently during the final sort percedure. The records will be sorted and written out inside each partition. Since the partition is sorted, all records will be sorted.</li>
+<li>Carbondata will use these bounds as ranges to process data concurrently during the final sort procedure. The records will be sorted and written out inside each partition. Since the partition is sorted, all records will be sorted.</li>
 <li>Since the actual order and literal order of the dictionary column are not necessarily the same, we do not recommend you to use this feature if the first sort column is 'dictionary_include'.</li>
 <li>The option works better if your CPU usage during loading is low. If your current system CPU usage is high, better not to use this option. Besides, it depends on the user to specify the bounds. If user does not know the exactly bounds to make the data distributed evenly among the bounds, loading performance will still be better than before or at least the same as before.</li>
 <li>Users can find more information about this option in the description of PR1953.</li>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/documentation.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/documentation.html b/src/main/webapp/documentation.html
index aeab9cc..e49cdae 100644
--- a/src/main/webapp/documentation.html
+++ b/src/main/webapp/documentation.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -226,7 +226,7 @@
 <p><strong>File Format Concepts:</strong> Start with the basics of understanding the <a href="./file-structure-of-carbondata.html#carbondata-file-format">CarbonData file format</a> and its <a href="./file-structure-of-carbondata.html">storage structure</a>. This will help to understand other parts of the documentation, including deployment, programming and usage guides.</p>
 <p><strong>Quick Start:</strong> <a href="./quick-start-guide.html#installing-and-configuring-carbondata-to-run-locally-with-spark-shell">Run an example program</a> on your local machine or <a href="https://github.com/apache/carbondata/tree/master/examples/spark2/src/main/scala/org/apache/carbondata/examples" target=_blank>study some examples</a>.</p>
 <p><strong>CarbonData SQL Language Reference:</strong> CarbonData extends the Spark SQL language and adds several <a href="./ddl-of-carbondata.html">DDL</a> and <a href="./dml-of-carbondata.html">DML</a> statements to support operations on it.Refer to the <a href="./language-manual.html">Reference Manual</a> to understand the supported features and functions.</p>
-<p><strong>Programming Guides:</strong> You can read our guides about <a href="./sdk-guide.html">APIs supported</a> to learn how to integrate CarbonData with your applications.</p>
+<p><strong>Programming Guides:</strong> You can read our guides about <a href="./sdk-guide.html">Java APIs supported</a> or <a href="./csdk-guide.html">C++ APIs supported</a> to learn how to integrate CarbonData with your applications.</p>
 <h2>
 <a id="integration" class="anchor" href="#integration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Integration</h2>
 <p>CarbonData can be integrated with popular Execution engines like <a href="./quick-start-guide.html#spark">Spark</a> and <a href="./quick-start-guide.html#presto">Presto</a>.Refer to the <a href="./quick-start-guide.html#integration">Installation and Configuration</a> section to understand all modes of Integrating CarbonData.</p>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/faq.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/faq.html b/src/main/webapp/faq.html
index 2068c17..a42bbb8 100644
--- a/src/main/webapp/faq.html
+++ b/src/main/webapp/faq.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -390,17 +390,15 @@ TimeZone.setDefault(TimeZone.getTimeZone("Asia/Shanghai"))
 <h2>
 <a id="how-to-check-lru-cache-memory-footprint" class="anchor" href="#how-to-check-lru-cache-memory-footprint" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>How to check LRU cache memory footprint?</h2>
 <p>To observe the LRU cache memory footprint in the logs, configure the below property in the log4j.properties file.</p>
-<pre><code>log4j.logger.org.apache.carbondata.core.memory.UnsafeMemoryManager = DEBUG
-log4j.logger.org.apache.carbondata.core.cache.CarbonLRUCache = DEBUG
+<pre><code>log4j.logger.org.apache.carbondata.core.cache.CarbonLRUCache = DEBUG
 </code></pre>
-<p>These properties will enable the DEBUG log for the CarbonLRUCache and UnsafeMemoryManager which will print the information of memory consumed using which the LRU cache size can be decided. <strong>Note:</strong> Enabling the DEBUG log will degrade the query performance.</p>
+<p>This property enables the DEBUG log for CarbonLRUCache, which prints the memory consumed by cache entries so that a suitable LRU cache size can be decided. <strong>Note:</strong> Enabling the DEBUG log will degrade query performance. Ensure carbon.max.driver.lru.cache.size is configured to observe the current cache size.</p>
 <p><strong>Example:</strong></p>
-<pre><code>18/09/26 15:05:28 DEBUG UnsafeMemoryManager: pool-44-thread-1 Memory block (org.apache.carbondata.core.memory.MemoryBlock@21312095) is created with size 10. Total memory used 413Bytes, left 536870499Bytes
-18/09/26 15:05:29 DEBUG CarbonLRUCache: main Required size for entry /home/target/store/default/stored_as_carbondata_table/Fact/Part0/Segment_0/0_1537954529044.carbonindexmerge :: 181 Current cache size :: 0
-18/09/26 15:05:30 DEBUG UnsafeMemoryManager: main Freeing memory of size: 105available memory:  536870836
-18/09/26 15:05:30 DEBUG UnsafeMemoryManager: main Freeing memory of size: 76available memory:  536870912
+<pre><code>18/09/26 15:05:29 DEBUG CarbonLRUCache: main Required size for entry /home/target/store/default/stored_as_carbondata_table/Fact/Part0/Segment_0/0_1537954529044.carbonindexmerge :: 181 Current cache size :: 0
 18/09/26 15:05:30 INFO CarbonLRUCache: main Removed entry from InMemory lru cache :: /home/target/store/default/stored_as_carbondata_table/Fact/Part0/Segment_0/0_1537954529044.carbonindexmerge
 </code></pre>
+<p><strong>Note:</strong> If <code>Removed entry from InMemory lru cache</code> messages are frequently observed in the logs, you may have to increase the configured LRU cache size.</p>
+<p>To observe the LRU cache from a heap dump, check the heap used by the CarbonLRUCache class.</p>
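+<p>For reference, a minimal carbon.properties entry bounding the driver-side LRU cache; the size below (in MB) is only an illustrative value and should be tuned per deployment:</p>
+<pre><code>carbon.max.driver.lru.cache.size = 1024
+</code></pre>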
 <h2>
 <a id="getting-tablestatuslock-issues-when-loading-data" class="anchor" href="#getting-tablestatuslock-issues-when-loading-data" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Getting tablestatus.lock issues When loading data</h2>
 <p><strong>Symptom</strong></p>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/file-structure-of-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/file-structure-of-carbondata.html b/src/main/webapp/file-structure-of-carbondata.html
index baa34db..5230ba3 100644
--- a/src/main/webapp/file-structure-of-carbondata.html
+++ b/src/main/webapp/file-structure-of-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -313,8 +313,7 @@ Several encodings that may be used in CarbonData files.</li>
 <p><a href="../docs/images/2-3_3.png?raw=true" target="_blank" rel="noopener noreferrer"><img src="https://github.com/apache/carbondata/blob/master/docs/images/2-3_3.png?raw=true" alt="V3" style="max-width:100%;"></a></p>
 <h4>
 <a id="footer-format" class="anchor" href="#footer-format" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Footer format</h4>
-<p>Footer records each carbondata
-All blocklet data distribution information and statistical related metadata information (minmax, startkey/endkey) inside the file.</p>
+<p>The footer records, for each carbondata file, all blocklet data distribution information and statistics-related metadata (minmax, startkey/endkey) inside the file.</p>
 <p><a href="../docs/images/2-3_4.png?raw=true" target="_blank" rel="noopener noreferrer"><img src="https://github.com/apache/carbondata/blob/master/docs/images/2-3_4.png?raw=true" alt="Footer format" style="max-width:100%;"></a></p>
 <ol>
 <li>BlockletInfo3 is used to record the offset and length of all ColumnChunk3.</li>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/how-to-contribute-to-apache-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/how-to-contribute-to-apache-carbondata.html b/src/main/webapp/how-to-contribute-to-apache-carbondata.html
index eae12e3..a6dc1ee 100644
--- a/src/main/webapp/how-to-contribute-to-apache-carbondata.html
+++ b/src/main/webapp/how-to-contribute-to-apache-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/index.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/index.html b/src/main/webapp/index.html
index a78233a..6a5967d 100644
--- a/src/main/webapp/index.html
+++ b/src/main/webapp/index.html
@@ -54,6 +54,9 @@
                                 class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -66,9 +69,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/introduction.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/introduction.html b/src/main/webapp/introduction.html
index 51a46c2..0cfa369 100644
--- a/src/main/webapp/introduction.html
+++ b/src/main/webapp/introduction.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/language-manual.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/language-manual.html b/src/main/webapp/language-manual.html
index 9f738b8..a95de91 100644
--- a/src/main/webapp/language-manual.html
+++ b/src/main/webapp/language-manual.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/lucene-datamap-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/lucene-datamap-guide.html b/src/main/webapp/lucene-datamap-guide.html
index f461ca5..ef819a5 100644
--- a/src/main/webapp/lucene-datamap-guide.html
+++ b/src/main/webapp/lucene-datamap-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/performance-tuning.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/performance-tuning.html b/src/main/webapp/performance-tuning.html
index e077e85..e539614 100644
--- a/src/main/webapp/performance-tuning.html
+++ b/src/main/webapp/performance-tuning.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -399,8 +399,8 @@ You can configure CarbonData by tuning following properties in carbon.properties
 </tr>
 <tr>
 <td>carbon.sort.file.write.buffer.size</td>
-<td>Default:  50000.</td>
-<td>DataOutputStream buffer.</td>
+<td>Default:  16384.</td>
+<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. This configuration determines the buffer size to be used for reading and writing such files.</td>
 </tr>
 <tr>
 <td>carbon.merge.sort.reader.thread</td>
@@ -474,7 +474,7 @@ scenarios. After the completion of POC, some of the configurations impacting the
 <tr>
 <td>carbon.detail.batch.size</td>
 <td>spark/carbonlib/carbon.properties</td>
-<td>Data loading</td>
+<td>Querying</td>
 <td>The buffer size to store records, returned from the block scan.</td>
 <td>In limit scenarios this parameter is very important. For example, if the query limit is 1000 but this value is set to 3000, the scan returns 3000 records while Spark takes only 1000 rows, so the remaining 2000 are wasted. In one finance test case, setting it to 100 made the limit-1000 scenario about 2 times faster than setting it to 12000.</td>
 </tr>
@@ -486,18 +486,11 @@ scenarios. After the completion of POC, some of the configurations impacting the
 <td>If this is set to true, CarbonData will use YARN local directories for multi-table load disk load balance, which will improve the data load performance.</td>
 </tr>
 <tr>
-<td>carbon.use.multiple.temp.dir</td>
-<td>spark/carbonlib/carbon.properties</td>
-<td>Data loading</td>
-<td>Whether to use multiple YARN local directories during table data loading for disk load balance</td>
-<td>After enabling 'carbon.use.local.dir', if this is set to true, CarbonData will use all YARN local directories during data load for disk load balance, that will improve the data load performance. Please enable this property when you encounter disk hotspot problem during data loading.</td>
-</tr>
-<tr>
 <td>carbon.sort.temp.compressor</td>
 <td>spark/carbonlib/carbon.properties</td>
 <td>Data loading</td>
 <td>Specify the name of compressor to compress the intermediate sort temporary files during sort procedure in data loading.</td>
-<td>The optional values are 'SNAPPY','GZIP','BZIP2','LZ4','ZSTD', and empty. By default, empty means that Carbondata will not compress the sort temp files. This parameter will be useful if you encounter disk bottleneck.</td>
+<td>The optional values are 'SNAPPY','GZIP','BZIP2','LZ4','ZSTD', and empty. In particular, empty means that CarbonData will not compress the sort temp files. This parameter is useful if you encounter a disk bottleneck.</td>
 </tr>
 <tr>
 <td>carbon.load.skewedDataOptimization.enabled</td>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/preaggregate-datamap-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/preaggregate-datamap-guide.html b/src/main/webapp/preaggregate-datamap-guide.html
index e4f1c91..5e0d4e3 100644
--- a/src/main/webapp/preaggregate-datamap-guide.html
+++ b/src/main/webapp/preaggregate-datamap-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/quick-start-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/quick-start-guide.html b/src/main/webapp/quick-start-guide.html
index 4703ed3..a2f093d 100644
--- a/src/main/webapp/quick-start-guide.html
+++ b/src/main/webapp/quick-start-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -564,7 +564,7 @@ hdfs://&lt;host_name&gt;:port/user/hive/warehouse/carbon.store
 <h2>
 <a id="installing-and-configuring-carbondata-on-presto" class="anchor" href="#installing-and-configuring-carbondata-on-presto" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Installing and Configuring CarbonData on Presto</h2>
 <p><strong>NOTE:</strong> <strong>CarbonData tables cannot be created or loaded from Presto. Users need to create a CarbonData table and load data into it
-either with <a href="#installing-and-configuring-carbondata-to-run-locally-with-spark-shell">Spark</a> or <a href="./sdk-guide.html">SDK</a>.
+either with <a href="#installing-and-configuring-carbondata-to-run-locally-with-spark-shell">Spark</a> or <a href="./sdk-guide.html">SDK</a> or <a href="./csdk-guide.html">C++ SDK</a>.
 Once the table is created, it can be queried from Presto.</strong></p>
 <h3>
 <a id="installing-presto" class="anchor" href="#installing-presto" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Installing Presto</h3>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/release-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/release-guide.html b/src/main/webapp/release-guide.html
index c40f316..dcdaba3 100644
--- a/src/main/webapp/release-guide.html
+++ b/src/main/webapp/release-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/s3-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/s3-guide.html b/src/main/webapp/s3-guide.html
index 27c1c66..ba25dfb 100644
--- a/src/main/webapp/s3-guide.html
+++ b/src/main/webapp/s3-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/sdk-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/sdk-guide.html b/src/main/webapp/sdk-guide.html
index f33d5f9..37d6b26 100644
--- a/src/main/webapp/sdk-guide.html
+++ b/src/main/webapp/sdk-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -227,10 +227,14 @@
 </ol>
 <h1>
 <a id="sdk-writer" class="anchor" href="#sdk-writer" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>SDK Writer</h1>
-<p>In the carbon jars package, there exist a carbondata-store-sdk-x.x.x-SNAPSHOT.jar, including SDK writer and reader.</p>
+<p>In the carbon jars package, there is a carbondata-store-sdk-x.x.x-SNAPSHOT.jar, which includes the SDK writer and reader.
+To use the SDK, besides carbondata-store-sdk-x.x.x-SNAPSHOT.jar,
+you also need carbondata-core-x.x.x-SNAPSHOT.jar, carbondata-common-x.x.x-SNAPSHOT.jar,
+carbondata-format-x.x.x-SNAPSHOT.jar, carbondata-hadoop-x.x.x-SNAPSHOT.jar and carbondata-processing-x.x.x-SNAPSHOT.jar.
+Alternatively, you can use carbondata-sdk.jar directly.</p>
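+<p>For illustration only, the same dependencies expressed as Maven coordinates; the artifact ids below are assumed from the CarbonData module names and the version is a placeholder, so verify them against the actual release before use:</p>
+<pre><code>&lt;dependency&gt;
+  &lt;groupId&gt;org.apache.carbondata&lt;/groupId&gt;
+  &lt;artifactId&gt;carbondata-store-sdk&lt;/artifactId&gt;
+  &lt;version&gt;x.x.x&lt;/version&gt;
+&lt;/dependency&gt;
+&lt;!-- plus carbondata-core, carbondata-common, carbondata-format,
+     carbondata-hadoop and carbondata-processing with the same version --&gt;
+</code></pre>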
 <p>This SDK writer writes a carbondata file and a carbonindex file at a given path.
 External clients can make use of this writer to convert other format data or live data to create carbondata and index files.
-These SDK writer output contains just a carbondata and carbonindex files. No metadata folder will be present.</p>
+The output of this SDK writer contains just carbondata and carbonindex files. No metadata folder will be present.</p>
 <h2>
 <a id="quick-example" class="anchor" href="#quick-example" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Quick example</h2>
 <h3>
@@ -267,7 +271,7 @@ These SDK writer output contains just a carbondata and carbonindex files. No met
 
      <span class="pl-smi">CarbonProperties</span><span class="pl-k">.</span>getInstance()<span class="pl-k">.</span>addProperty(<span class="pl-s"><span class="pl-pds">"</span>enable.offheap.sort<span class="pl-pds">"</span></span>, enableOffheap);
  
-     <span class="pl-smi">CarbonWriterBuilder</span> builder <span class="pl-k">=</span> <span class="pl-smi">CarbonWriter</span><span class="pl-k">.</span>builder()<span class="pl-k">.</span>outputPath(path)<span class="pl-k">.</span>withCsvInput(schema);
+     <span class="pl-smi">CarbonWriterBuilder</span> builder <span class="pl-k">=</span> <span class="pl-smi">CarbonWriter</span><span class="pl-k">.</span>builder()<span class="pl-k">.</span>outputPath(path)<span class="pl-k">.</span>withCsvInput(schema)<span class="pl-k">.</span>writtenBy(<span class="pl-s"><span class="pl-pds">"</span>SDK<span class="pl-pds">"</span></span>);
  
      <span class="pl-smi">CarbonWriter</span> writer <span class="pl-k">=</span> builder<span class="pl-k">.</span>build();
  
@@ -322,7 +326,7 @@ These SDK writer output contains just a carbondata and carbonindex files. No met
     <span class="pl-k">try</span> {
       <span class="pl-smi">CarbonWriter</span> writer <span class="pl-k">=</span> <span class="pl-smi">CarbonWriter</span><span class="pl-k">.</span>builder()
           .outputPath(path)
-          .withAvroInput(<span class="pl-k">new</span> <span class="pl-smi">org.apache.avro<span class="pl-k">.</span>Schema</span>.<span class="pl-smi">Parser</span>()<span class="pl-k">.</span>parse(avroSchema))<span class="pl-k">.</span>build();
+          .withAvroInput(<span class="pl-k">new</span> <span class="pl-smi">org.apache.avro<span class="pl-k">.</span>Schema</span>.<span class="pl-smi">Parser</span>()<span class="pl-k">.</span>parse(avroSchema))<span class="pl-k">.</span>writtenBy(<span class="pl-s"><span class="pl-pds">"</span>SDK<span class="pl-pds">"</span></span>)<span class="pl-k">.</span>build();
 
       <span class="pl-k">for</span> (<span class="pl-k">int</span> i <span class="pl-k">=</span> <span class="pl-c1">0</span>; i <span class="pl-k">&lt;</span> <span class="pl-c1">100</span>; i<span class="pl-k">++</span>) {
         writer<span class="pl-k">.</span>write(record);
@@ -360,7 +364,7 @@ These SDK writer output contains just a carbondata and carbonindex files. No met
 
     <span class="pl-smi">Schema</span> <span class="pl-smi">CarbonSchema</span> <span class="pl-k">=</span> <span class="pl-k">new</span> <span class="pl-smi">Schema</span>(fields);
 
-    <span class="pl-smi">CarbonWriterBuilder</span> builder <span class="pl-k">=</span> <span class="pl-smi">CarbonWriter</span><span class="pl-k">.</span>builder()<span class="pl-k">.</span>outputPath(path)<span class="pl-k">.</span>withJsonInput(<span class="pl-smi">CarbonSchema</span>);
+    <span class="pl-smi">CarbonWriterBuilder</span> builder <span class="pl-k">=</span> <span class="pl-smi">CarbonWriter</span><span class="pl-k">.</span>builder()<span class="pl-k">.</span>outputPath(path)<span class="pl-k">.</span>withJsonInput(<span class="pl-smi">CarbonSchema</span>)<span class="pl-k">.</span>writtenBy(<span class="pl-s"><span class="pl-pds">"</span>SDK<span class="pl-pds">"</span></span>);
 
     <span class="pl-c"><span class="pl-c">//</span> initialize json writer with carbon schema</span>
     <span class="pl-smi">CarbonWriter</span> writer <span class="pl-k">=</span> builder<span class="pl-k">.</span>build();
@@ -644,6 +648,8 @@ public CarbonWriterBuilder withLoadOptions(Map&lt;String, String&gt; options);
 * j. sort_scope -- "local_sort", "no_sort", "batch_sort". default value is "local_sort"
 * k. long_string_columns -- comma separated string columns which are more than 32k length. 
 *                           default value is null.
+* l. inverted_index -- comma separated string columns for which inverted index needs to be
+*                      generated
 *
 * @return updated CarbonWriterBuilder
 */
@@ -667,6 +673,15 @@ public CarbonWriterBuilder withThreadSafe(short numOfThreads);
 */
 public CarbonWriterBuilder withHadoopConf(Configuration conf)
 </code></pre>
+<pre><code>  /**
+   * Updates the hadoop configuration with the given key value
+   *
+   * @param key   key word
+   * @param value value
+   * @return this object
+   */
+  public CarbonWriterBuilder withHadoopConf(String key, String value);
+</code></pre>
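+<p>A minimal sketch of how the key/value variant might be used, for example to pass S3 credentials while building a writer. The bucket path and credential placeholders are illustrative only, and a Schema object named <code>schema</code> is assumed to have been constructed as in the quick example above:</p>
+<pre><code>CarbonWriter writer = CarbonWriter.builder()
+    .outputPath("s3a://my-bucket/carbon-output")          // placeholder output location
+    .withHadoopConf("fs.s3a.access.key", "&lt;access-key&gt;")  // standard Hadoop S3A keys
+    .withHadoopConf("fs.s3a.secret.key", "&lt;secret-key&gt;")
+    .withCsvInput(schema)
+    .writtenBy("SDK")
+    .build();
+</code></pre>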
 <pre><code>/**
 * to build a {@link CarbonWriter}, which accepts row in CSV format
 *
@@ -692,6 +707,23 @@ public CarbonWriterBuilder withAvroInput(org.apache.avro.Schema avroSchema);
 public CarbonWriterBuilder withJsonInput(Schema carbonSchema);
 </code></pre>
 <pre><code>/**
+* Records the name of the application that is writing the carbondata files.
+* Calling this API is mandatory; otherwise the build() call will fail with an error.
+* @param appName name of the application which is writing the carbondata files
+* @return CarbonWriterBuilder
+*/
+public CarbonWriterBuilder writtenBy(String appName);
+</code></pre>
+<pre><code>/**
+* Sets the list of columns for which inverted index needs to be generated
+* @param invertedIndexColumns is a string array of columns for which inverted index needs to
+* be generated.
+* If it is null or an empty array, inverted index will be generated for none of the columns
+* @return updated CarbonWriterBuilder
+*/
+public CarbonWriterBuilder invertedIndexFor(String[] invertedIndexColumns);
+</code></pre>
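+<p>A hedged usage sketch of requesting inverted indexes for selected columns while writing; the column names, the output path and the <code>schema</code> variable are assumptions for illustration:</p>
+<pre><code>CarbonWriter writer = CarbonWriter.builder()
+    .outputPath(path)
+    .withCsvInput(schema)
+    .invertedIndexFor(new String[]{"name", "city"})  // columns assumed to exist in the schema
+    .writtenBy("SDK")
+    .build();
+</code></pre>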
+<pre><code>/**
 * Build a {@link CarbonWriter}
 * This writer is not thread safe,
 * use withThreadSafe() configuration in multi thread environment
@@ -702,9 +734,22 @@ public CarbonWriterBuilder withJsonInput(Schema carbonSchema);
 */
 public CarbonWriter build() throws IOException, InvalidLoadOptionException;
 </code></pre>
+<pre><code> /**
+   * Configure Row Record Reader for reading.
+   *
+   */
+  public CarbonReaderBuilder withRowRecordReader();
+</code></pre>
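+<p>Note that this method belongs to CarbonReaderBuilder. If the row record reader is wanted, a minimal sketch of enabling it on the reader builder (the table path is a placeholder):</p>
+<pre><code>CarbonReader reader = CarbonReader.builder("/path/to/carbon/files")
+    .withRowRecordReader()
+    .build();
+</code></pre>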
 <h3>
 <a id="class-orgapachecarbondatasdkfilecarbonwriter" class="anchor" href="#class-orgapachecarbondatasdkfilecarbonwriter" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Class org.apache.carbondata.sdk.file.CarbonWriter</h3>
 <pre><code>/**
+* Create a {@link CarbonWriterBuilder} to build a {@link CarbonWriter}
+*/
+public static CarbonWriterBuilder builder() {
+    return new CarbonWriterBuilder();
+}
+</code></pre>
+<pre><code>/**
 * Write an object to the file, the format of the object depends on the implementation
 * If AvroCarbonWriter, object is of type org.apache.avro.generic.GenericData.Record, 
 *                      which is one row of data.
@@ -720,13 +765,6 @@ public abstract void write(Object object) throws IOException;
 */
 public abstract void close() throws IOException;
 </code></pre>
-<pre><code>/**
-* Create a {@link CarbonWriterBuilder} to build a {@link CarbonWriter}
-*/
-public static CarbonWriterBuilder builder() {
-    return new CarbonWriterBuilder();
-}
-</code></pre>
 <h3>
 <a id="class-orgapachecarbondatasdkfilefield" class="anchor" href="#class-orgapachecarbondatasdkfilefield" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Class org.apache.carbondata.sdk.file.Field</h3>
 <pre><code>/**
@@ -824,6 +862,24 @@ External client can make use of this reader to read CarbonData files without Car
    */
   public static CarbonReaderBuilder builder(String tablePath);
 </code></pre>
+<pre><code>/**
+  * Breaks the list of CarbonRecordReader in CarbonReader into multiple
+  * CarbonReader objects, each iterating through some 'carbondata' files
+  * and returns that list of CarbonReader objects
+  *
+  * If the no. of files is greater than maxSplits, then break the
+  * CarbonReader into maxSplits splits, with each split iterating
+  * through &gt;= 1 file.
+  *
+  * If the no. of files is less than maxSplits, then return list of
+  * CarbonReader with size as the no. of files, with each CarbonReader
+  * iterating through exactly one file
+  *
+  * @param maxSplits: Int
+  * @return list of CarbonReader objects
+  */
+  public List&lt;CarbonReader&gt; split(int maxSplits);
+</code></pre>
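+<p>A hedged sketch of using split() to fan reading out over several readers, each covering a subset of the carbondata files; the path, the number of splits and the Object[] row shape are assumptions for illustration:</p>
+<pre><code>CarbonReader reader = CarbonReader.builder("/path/to/carbon/files").build();
+List&lt;CarbonReader&gt; readers = reader.split(4);   // at most 4 readers are returned
+for (CarbonReader part : readers) {
+  while (part.hasNext()) {
+    Object[] row = (Object[]) part.readNextRow(); // one row of the files covered by this split
+    // process the row here
+  }
+  part.close();
+}
+</code></pre>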
 <pre><code>  /**
    * Return true if has next row
    */
@@ -835,6 +891,11 @@ External client can make use of this reader to read CarbonData files without Car
   public T readNextRow();
 </code></pre>
 <pre><code>  /**
+   * Read and return next batch row objects
+   */
+  public Object[] readNextBatchRow();
+</code></pre>
+<pre><code>  /**
    * Close reader
    */
   public void close();
@@ -865,6 +926,14 @@ External client can make use of this reader to read CarbonData files without Car
   */
   public CarbonReaderBuilder filter(Expression filterExpression);
 </code></pre>
+<pre><code>  /**
+   * Sets the batch size of records to read
+   *
+   * @param batch batch size
+   * @return updated CarbonReaderBuilder
+   */
+  public CarbonReaderBuilder withBatch(int batch);
+</code></pre>
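+<p>Assuming batch reading is desired, a minimal sketch combining withBatch and readNextBatchRow; the batch size and the path are illustrative values only:</p>
+<pre><code>CarbonReader reader = CarbonReader.builder("/path/to/carbon/files")
+    .withBatch(1000)        // read up to 1000 rows per call
+    .build();
+while (reader.hasNext()) {
+  Object[] batch = reader.readNextBatchRow();
+  // each element of 'batch' is one row
+}
+reader.close();
+</code></pre>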
 <pre><code>/**
  * To support hadoop configuration
  *
@@ -873,6 +942,15 @@ External client can make use of this reader to read CarbonData files without Car
  */
  public CarbonReaderBuilder withHadoopConf(Configuration conf);
 </code></pre>
+<pre><code>  /**
+   * Updates the hadoop configuration with the given key value
+   *
+   * @param key   key word
+   * @param value value
+   * @return this object
+   */
+  public CarbonReaderBuilder withHadoopConf(String key, String value);
+</code></pre>
 <pre><code> /**
    * Build CarbonReader
    *
@@ -892,6 +970,7 @@ External client can make use of this reader to read CarbonData files without Car
    * @return schema object
    * @throws IOException
    */
+  @Deprecated
   public static Schema readSchemaInSchemaFile(String schemaFilePath);
 </code></pre>
 <pre><code>  /**
@@ -900,6 +979,7 @@ External client can make use of this reader to read CarbonData files without Car
    * @param dataFilePath complete path including carbondata file name
    * @return Schema object
    */
+  @Deprecated
   public static Schema readSchemaInDataFile(String dataFilePath);
 </code></pre>
 <pre><code>  /**
@@ -909,8 +989,42 @@ External client can make use of this reader to read CarbonData files without Car
    * @return schema object
    * @throws IOException
    */
+  @Deprecated
   public static Schema readSchemaInIndexFile(String indexFilePath);
 </code></pre>
+<pre><code>  /**
+   * Read schema from a path. The path can be a folder path, a carbonindex file path
+   * or a carbondata file path. This variant will not check the schema of all files.
+   *
+   * @param path file/folder path
+   * @return schema
+   * @throws IOException
+   */
+  public static Schema readSchema(String path);
+</code></pre>
+<pre><code>  /**
+   * Read schema from a path. The path can be a folder path, a carbonindex file path
+   * or a carbondata file path. The user can decide whether to check the schema of all files.
+   *
+   * @param path             file/folder path
+   * @param validateSchema   whether to check the schema of all files
+   * @return schema
+   * @throws IOException
+   */
+  public static Schema readSchema(String path, boolean validateSchema);
+</code></pre>
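+<p>A hedged one-liner showing how the schema might be read from a folder without validating every file; the path is a placeholder and the enclosing class name used below is an assumption, so check the class under which these static methods are documented:</p>
+<pre><code>// class name assumed for illustration; see the class that declares readSchema above
+Schema schema = CarbonSchemaReader.readSchema("/path/to/carbon/files", false);
+</code></pre>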
+<pre><code>  /**
+   * This method returns the version details as a formatted string by reading from the carbondata file.
+   * If the application name is SDK_1.0.0 and it wrote the carbondata file with carbondata project version 1.6,
+   * then this API returns the String "SDK_1.0.0 in version: 1.6.0-SNAPSHOT"
+   * @param dataFilePath complete path including carbondata file name
+   * @return string with information of who has written this file in which carbondata project version
+   * @throws IOException
+   */
+  public static String getVersionDetails(String dataFilePath);
+</code></pre>
 <h3>
 <a id="class-orgapachecarbondatasdkfileschema-1" class="anchor" href="#class-orgapachecarbondatasdkfileschema-1" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Class org.apache.carbondata.sdk.file.Schema</h3>
 <pre><code>  /**

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/security.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/security.html b/src/main/webapp/security.html
index ce1bc30..75a2f65 100644
--- a/src/main/webapp/security.html
+++ b/src/main/webapp/security.html
@@ -45,6 +45,9 @@
                            aria-expanded="false">Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/segment-management-on-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/segment-management-on-carbondata.html b/src/main/webapp/segment-management-on-carbondata.html
index 2f04025..dae0d0e 100644
--- a/src/main/webapp/segment-management-on-carbondata.html
+++ b/src/main/webapp/segment-management-on-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/streaming-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/streaming-guide.html b/src/main/webapp/streaming-guide.html
index 7ea58ce..8d8cb82 100644
--- a/src/main/webapp/streaming-guide.html
+++ b/src/main/webapp/streaming-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -243,9 +243,10 @@
 <li>
 <a href="#streaming-job-management">Streaming Job Management</a>
 <ul>
-<li><a href="#start-stream">START STREAM</a></li>
-<li><a href="#stop-stream">STOP STREAM</a></li>
+<li><a href="#create-stream">CREATE STREAM</a></li>
+<li><a href="#drop-stream">DROP STREAM</a></li>
 <li><a href="#show-streams">SHOW STREAMS</a></li>
+<li><a href="#alter-table-close-stream">CLOSE STREAM</a></li>
 </ul>
 </li>
 </ul>
@@ -570,7 +571,7 @@ streaming table using following DDL.</p>
 
     sql(
       """
-        |START STREAM job123 ON TABLE sink
+        |CREATE STREAM job123 ON TABLE sink
         |STMPROPERTIES(
         |  'trigger'='ProcessingTime',
         |  'interval'='1 seconds')
@@ -580,7 +581,7 @@ streaming table using following DDL.</p>
         |  WHERE id % 2 = 1
       """.stripMargin)
 
-    sql("STOP STREAM job123")
+    sql("DROP STREAM job123")
 
     sql("SHOW STREAMS [ON TABLE tableName]")
 </code></pre>
@@ -591,14 +592,14 @@ streaming table using following DDL.</p>
 <p>As above example shown:</p>
 <ul>
 <li>
-<code>START STREAM jobName ON TABLE tableName</code> is used to start a streaming ingest job.</li>
+<code>CREATE STREAM jobName ON TABLE tableName</code> is used to start a streaming ingest job.</li>
 <li>
-<code>STOP STREAM jobName</code> is used to stop a streaming job by its name</li>
+<code>DROP STREAM jobName</code> is used to stop a streaming job by its name</li>
 <li>
 <code>SHOW STREAMS [ON TABLE tableName]</code> is used to print streaming job information</li>
 </ul>
 <h5>
-<a id="start-stream" class="anchor" href="#start-stream" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>START STREAM</h5>
+<a id="create-stream" class="anchor" href="#create-stream" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CREATE STREAM</h5>
 <p>When this is issued, carbon will start a structured streaming job to do the streaming ingestion. Before launching the job, system will validate:</p>
 <ul>
 <li>
@@ -651,9 +652,24 @@ TBLPROPERTIES(
  <span class="pl-s"><span class="pl-pds">'</span>record_format<span class="pl-pds">'</span></span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">'</span>csv<span class="pl-pds">'</span></span>, <span class="pl-k">//</span> can be csv <span class="pl-k">or</span> json, default is csv
  <span class="pl-s"><span class="pl-pds">'</span>delimiter<span class="pl-pds">'</span></span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">'</span>|<span class="pl-pds">'</span></span>
 )</pre></div>
+<ul>
+<li>Then CREATE STREAM can be used to start the streaming ingest job from source table to sink table</li>
+</ul>
+<pre><code>CREATE STREAM job123 ON TABLE sink
+STMPROPERTIES(
+    'trigger'='ProcessingTime',
+     'interval'='10 seconds'
+) 
+AS
+   SELECT *
+   FROM source
+   WHERE id % 2 = 1
+</code></pre>
 <h5>
-<a id="stop-stream" class="anchor" href="#stop-stream" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>STOP STREAM</h5>
-<p>When this is issued, the streaming job will be stopped immediately. It will fail if the jobName specified is not exist.</p>
+<a id="drop-stream" class="anchor" href="#drop-stream" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>DROP STREAM</h5>
+<p>When <code>DROP STREAM</code> is issued, the streaming job will be stopped immediately. It will fail if the specified jobName does not exist.</p>
+<pre><code>DROP STREAM job123
+</code></pre>
 <h5>
 <a id="show-streams" class="anchor" href="#show-streams" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>SHOW STREAMS</h5>
 <p><code>SHOW STREAMS ON TABLE tableName</code> command will print the streaming job information as follows</p>
@@ -680,6 +696,10 @@ TBLPROPERTIES(
 </tbody>
 </table>
 <p><code>SHOW STREAMS</code> command will show all stream jobs in the system.</p>
+<h5>
+<a id="alter-table-close-stream" class="anchor" href="#alter-table-close-stream" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>ALTER TABLE CLOSE STREAM</h5>
+<p>When the streaming application is stopped and the user wants to manually trigger the conversion of carbon streaming files to columnar files, one can use
+<code>ALTER TABLE sink COMPACT 'CLOSE_STREAMING';</code></p>
 <script>
 $(function() {
   // Show selected style on nav item

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/supported-data-types-in-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/supported-data-types-in-carbondata.html b/src/main/webapp/supported-data-types-in-carbondata.html
index 4f584da..d873fab 100644
--- a/src/main/webapp/supported-data-types-in-carbondata.html
+++ b/src/main/webapp/supported-data-types-in-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/timeseries-datamap-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/timeseries-datamap-guide.html b/src/main/webapp/timeseries-datamap-guide.html
index a9137d0..9550fa1 100644
--- a/src/main/webapp/timeseries-datamap-guide.html
+++ b/src/main/webapp/timeseries-datamap-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/usecases.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/usecases.html b/src/main/webapp/usecases.html
index 7e5e07c..b6b4d74 100644
--- a/src/main/webapp/usecases.html
+++ b/src/main/webapp/usecases.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -330,12 +330,6 @@
 <td>yarn application directory will be usually on a single disk.YARN would be configured with multiple disks to be used as temp or to assign randomly to applications. Using the yarn temp directory will allow carbon to use multiple disks and improve IO performance</td>
 </tr>
 <tr>
-<td>Data Loading</td>
-<td>carbon.use.multiple.temp.dir</td>
-<td>TRUE</td>
-<td>multiple disks to write sort files will lead to better IO and reduce the IO bottleneck</td>
-</tr>
-<tr>
 <td>Compaction</td>
 <td>carbon.compaction.level.threshold</td>
 <td>6,6</td>
@@ -468,12 +462,6 @@
 </tr>
 <tr>
 <td>Data Loading</td>
-<td>carbon.use.multiple.temp.dir</td>
-<td>TRUE</td>
-<td>multiple disks to write sort files will lead to better IO and reduce the IO bottleneck</td>
-</tr>
-<tr>
-<td>Data Loading</td>
 <td>sort.inmemory.size.in.mb</td>
 <td>92160</td>
 <td>Memory allocated to do inmemory sorting. When more memory is available in the node, configuring this will retain more sort blocks in memory so that the merge sort is faster due to no/very less IO</td>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/videogallery.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/videogallery.html b/src/main/webapp/videogallery.html
index d6b9dbe..7b7e66c 100644
--- a/src/main/webapp/videogallery.html
+++ b/src/main/webapp/videogallery.html
@@ -49,6 +49,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -61,9 +64,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/site/markdown/CSDK-guide.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/CSDK-guide.md b/src/site/markdown/CSDK-guide.md
index c4f4a31..6002cf5 100644
--- a/src/site/markdown/CSDK-guide.md
+++ b/src/site/markdown/CSDK-guide.md
@@ -15,123 +15,33 @@
     limitations under the License.
 -->
 
-# CSDK Guide
+# C++ SDK Guide
 
-CarbonData CSDK provides C++ interface to write and read carbon file. 
-CSDK use JNI to invoke java SDK in C++ code.
+CarbonData C++ SDK provides a C++ interface to write and read carbon files. 
+The C++ SDK uses JNI to invoke the Java SDK from C++ code.
 
 
-# CSDK Reader
-This CSDK reader reads CarbonData file and carbonindex file at a given path.
+# C++ SDK Reader
+This C++ SDK reader reads CarbonData files and carbonindex files at a given path.
 External client can make use of this reader to read CarbonData files in C++ 
 code and without CarbonSession.
 
 
 In the carbon jars package, there exist a carbondata-sdk.jar, 
-including SDK reader for CSDK.
+including the SDK reader for the C++ SDK.
 ## Quick example
-```
-// 1. init JVM
-JavaVM *jvm;
-JNIEnv *initJVM() {
-    JNIEnv *env;
-    JavaVMInitArgs vm_args;
-    int parNum = 3;
-    int res;
-    JavaVMOption options[parNum];
-
-    options[0].optionString = "-Djava.compiler=NONE";
-    options[1].optionString = "-Djava.class.path=../../sdk/target/carbondata-sdk.jar";
-    options[2].optionString = "-verbose:jni";
-    vm_args.version = JNI_VERSION_1_8;
-    vm_args.nOptions = parNum;
-    vm_args.options = options;
-    vm_args.ignoreUnrecognized = JNI_FALSE;
-
-    res = JNI_CreateJavaVM(&jvm, (void **) &env, &vm_args);
-    if (res < 0) {
-        fprintf(stderr, "\nCan't create Java VM\n");
-        exit(1);
-    }
-
-    return env;
-}
-
-// 2. create carbon reader and read data 
-// 2.1 read data from local disk
-/**
- * test read data from local disk, without projection
- *
- * @param env  jni env
- * @return
- */
-bool readFromLocalWithoutProjection(JNIEnv *env) {
-
-    CarbonReader carbonReaderClass;
-    carbonReaderClass.builder(env, "../resources/carbondata", "test");
-    carbonReaderClass.build();
-
-    while (carbonReaderClass.hasNext()) {
-        jobjectArray row = carbonReaderClass.readNextRow();
-        jsize length = env->GetArrayLength(row);
-        int j = 0;
-        for (j = 0; j < length; j++) {
-            jobject element = env->GetObjectArrayElement(row, j);
-            char *str = (char *) env->GetStringUTFChars((jstring) element, JNI_FALSE);
-            printf("%s\t", str);
-        }
-        printf("\n");
-    }
-    carbonReaderClass.close();
-}
-
-// 2.2 read data from S3
-
-/**
- * read data from S3
- * parameter is ak sk endpoint
- *
- * @param env jni env
- * @param argv argument vector
- * @return
- */
-bool readFromS3(JNIEnv *env, char *argv[]) {
-    CarbonReader reader;
-
-    char *args[3];
-    // "your access key"
-    args[0] = argv[1];
-    // "your secret key"
-    args[1] = argv[2];
-    // "your endPoint"
-    args[2] = argv[3];
-
-    reader.builder(env, "s3a://sdk/WriterOutput", "test");
-    reader.withHadoopConf(3, args);
-    reader.build();
-    printf("\nRead data from S3:\n");
-    while (reader.hasNext()) {
-        jobjectArray row = reader.readNextRow();
-        jsize length = env->GetArrayLength(row);
-
-        int j = 0;
-        for (j = 0; j < length; j++) {
-            jobject element = env->GetObjectArrayElement(row, j);
-            char *str = (char *) env->GetStringUTFChars((jstring) element, JNI_FALSE);
-            printf("%s\t", str);
-        }
-        printf("\n");
-    }
-
-    reader.close();
-}
-
-// 3. destory JVM
-    (jvm)->DestroyJavaVM();
-```
-Find example code at main.cpp of CSDK module
+
+Please find example code at [main.cpp](https://github.com/apache/carbondata/blob/master/store/CSDK/test/main.cpp) of the CSDK module.
+
+When using C++ to read carbon files, users should initialize the JVM first, then create a 
+carbon reader and read the data. Example code for reading data from local disk 
+and from S3 is available in main.cpp of the CSDK module. Finally, users need to 
+release the memory and destroy the JVM.
+
+C++ SDK supports reading rows in batches. Users can set the batch size with withBatch(int batch) before build, and read a batch with readNextBatchRow().
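+
+Below is a minimal, illustrative sketch (not the full main.cpp) that assumes the CarbonReader class declared by the CSDK module's headers and a hypothetical local store at `../resources/carbondata`; it initializes the JVM, builds a reader with a batch size, reads batch by batch and finally releases the resources:
+
+```
+// illustrative sketch only; see main.cpp of the CSDK module for the complete, tested example
+#include <jni.h>
+#include <cstdio>
+#include "CarbonReader.h"   // assumed header from the CSDK module
+
+int main() {
+    // 1. init the JVM with carbondata-sdk.jar on the class path
+    JavaVM *jvm;
+    JNIEnv *env;
+    JavaVMOption options[1];
+    options[0].optionString = (char *) "-Djava.class.path=../../sdk/target/carbondata-sdk.jar";
+    JavaVMInitArgs vm_args;
+    vm_args.version = JNI_VERSION_1_8;
+    vm_args.nOptions = 1;
+    vm_args.options = options;
+    vm_args.ignoreUnrecognized = JNI_FALSE;
+    if (JNI_CreateJavaVM(&jvm, (void **) &env, &vm_args) < 0) {
+        fprintf(stderr, "Can't create Java VM\n");
+        return 1;
+    }
+
+    // 2. build a reader that returns rows in batches of 32
+    CarbonReader reader;
+    reader.builder(env, (char *) "../resources/carbondata", (char *) "test");
+    reader.withBatch(32);
+    reader.build();
+
+    // 3. read batch by batch until the data is exhausted
+    while (reader.hasNext()) {
+        jobjectArray batch = reader.readNextBatchRow();
+        printf("read a batch of %d rows\n", (int) env->GetArrayLength(batch));
+    }
+
+    // 4. release the reader and destroy the JVM
+    reader.close();
+    jvm->DestroyJavaVM();
+    return 0;
+}
+```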
 
 ## API List
+### CarbonReader
 ```
     /**
      * create a CarbonReaderBuilder object for building carbonReader,
@@ -143,7 +53,20 @@ Find example code at main.cpp of CSDK module
      * @return CarbonReaderBuilder object
      */
     jobject builder(JNIEnv *env, char *path, char *tableName);
+```
+
+```
+    /**
+     * create a CarbonReaderBuilder object for building carbonReader,
+     * CarbonReaderBuilder object  can configure different parameter
+     *
+     * @param env JNIEnv
+     * @param path data store path
+     * */
+    void builder(JNIEnv *env, char *path);
+```
 
+```
     /**
      * Configure the projection column names of carbon reader
      *
@@ -152,7 +75,9 @@ Find example code at main.cpp of CSDK module
      * @return CarbonReaderBuilder object
      */
     jobject projection(int argc, char *argv[]);
+```
 
+```
     /**
      *  build carbon reader with argument vector
      *  it support multiple parameter
@@ -164,7 +89,26 @@ Find example code at main.cpp of CSDK module
      * @return CarbonReaderBuilder object
      **/
     jobject withHadoopConf(int argc, char *argv[]);
+```
+
+```
+    /**
+     * Sets the batch size of records to read
+     *
+     * @param batch batch size
+     * @return CarbonReaderBuilder object
+     */
+    void withBatch(int batch);
+```
+
+```
+    /**
+     * Configure Row Record Reader for reading.
+     */
+    void withRowRecordReader();
+```
 
+```
     /**
      * build carbonReader object for reading data
      * it support read data from load disk
@@ -172,26 +116,262 @@ Find example code at main.cpp of CSDK module
      * @return carbonReader object
      */
     jobject build();
+```
 
+```
     /**
      * Whether it has next row data
      *
      * @return boolean value, if it has next row, return true. if it hasn't next row, return false.
      */
     jboolean hasNext();
+```
+
+```
+    /**
+     * read next carbonRow from data
+     * @return carbonRow object of one row
+     */
+     jobject readNextRow();
+```
 
+```
     /**
-     * read next row from data
+     * read Next Batch Row
      *
-     * @return object array of one row
+     * @return rows
      */
-    jobjectArray readNextRow();
+    jobjectArray readNextBatchRow();
+```
 
+```
     /**
      * close the carbon reader
      *
      * @return  boolean value
      */
     jboolean close();
+```
+
+# C++ SDK Writer
+This C++ SDK writer writes CarbonData files and carbonindex files at a given path. 
+External clients can make use of this writer to write CarbonData files in C++ 
+code without CarbonSession. The C++ SDK already supports S3 and local disk.
+
+In the carbon jars package, there exists a carbondata-sdk.jar, 
+including the SDK writer for the C++ SDK. 
+
+## Quick example
+Please find example code at [main.cpp](https://github.com/apache/carbondata/blob/master/store/CSDK/test/main.cpp) of the CSDK module.
+
+When using C++ to write carbon files, users should initialize the JVM first, then create a 
+carbon writer and write the data. Example code for writing data to local disk 
+and to S3 is available in main.cpp of the CSDK module. Finally, users need to 
+release the memory and destroy the JVM.
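+
+Below is a minimal, illustrative sketch of the write path (the JVM is assumed to be initialized as in the reader example; the output directory, the two-column json schema and the String-array row layout are assumptions for illustration):
+
+```
+// illustrative sketch only; see main.cpp of the CSDK module for the complete, tested example
+bool writeExample(JNIEnv *env) {
+    // 1. configure the writer: output path, csv-style json schema and the writing application name
+    CarbonWriter writer;
+    writer.builder(env);
+    writer.outputPath((char *) "./carbon-out");
+    writer.withCsvInput((char *) "[{\"name\":\"string\"},{\"age\":\"int\"}]");
+    writer.writtenBy((char *) "csdk-example");
+    writer.build();
+
+    // 2. write one record; the record is assumed to be a Java String[] matching the schema
+    jclass stringClass = env->FindClass("java/lang/String");
+    jobjectArray row = env->NewObjectArray(2, stringClass, NULL);
+    env->SetObjectArrayElement(row, 0, env->NewStringUTF("bob"));
+    env->SetObjectArrayElement(row, 1, env->NewStringUTF("25"));
+    writer.write(row);
+
+    // 3. close the writer to flush the carbondata and carbonindex files
+    writer.close();
+    return true;
+}
+```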
+
+## API List
+### CarbonWriter
+```
+    /**
+     * create a CarbonWriterBuilder object for building carbonWriter,
+     * CarbonWriterBuilder object  can configure different parameter
+     *
+     * @param env JNIEnv
+     * @return CarbonWriterBuilder object
+     */
+    void builder(JNIEnv *env);
+```
+
+```
+    /**
+     * Sets the output path of the writer builder
+     *
+     * @param path is the absolute path where output files are written
+     * This method must be called when building CarbonWriterBuilder
+     * @return updated CarbonWriterBuilder
+     */
+    void outputPath(char *path);
+```
+
+```
+    /**
+     * configure the schema with json style schema
+     *
+     * @param jsonSchema json style schema
+     * @return updated CarbonWriterBuilder
+     */
+    void withCsvInput(char *jsonSchema);
+```
+
+```
+    /**
+    * Updates the hadoop configuration with the given key value
+    *
+    * @param key key word
+    * @param value value
+    * @return CarbonWriterBuilder object
+    */
+    void withHadoopConf(char *key, char *value);
+```
+
+```
+    /**
+     * @param appName appName which is writing the carbondata files
+     */
+    void writtenBy(char *appName);
+```
+
+```
+    /**
+     * build carbonWriter object for writing data
+     * it supports writing data to local disk
+     *
+     * @return carbonWriter object
+     */
+    void build();
+```
+
+```
+    /**
+     * Write an object to the file, the format of the object depends on the
+     * implementation.
+     * Note: This API is not thread safe
+     */
+    void write(jobject obj);
+```
+
+```
+    /**
+     * close the carbon Writer
+     */
+    void close();
+```
+
+### CarbonSchemaReader
+
+```
+    /**
+     * constructor with jni env
+     *
+     * @param env  jni env
+     */
+    CarbonSchemaReader(JNIEnv *env);
+```
+
+```
+    /**
+     * read schema from path,
+     * path can be folder path, carbonindex file path, and carbondata file path
+     * and will not check all files schema
+     *
+     * @param path file/folder path
+     * @return schema
+     */
+    jobject readSchema(char *path);
+```
+
+```
+    /**
+     *  read schema from path,
+     *  path can be folder path, carbonindex file path, and carbondata file path
+     *  and user can decide whether check all files schema
+     *
+     * @param path carbon data path
+     * @param validateSchema whether check all files schema
+     * @return schema
+     */
+    jobject readSchema(char *path, bool validateSchema);
+```
+
+### Schema
+``` 
+    /**
+     * constructor with jni env and carbon schema data
+     *
+     * @param env jni env
+     * @param schema  carbon schema data
+     */
+    Schema(JNIEnv *env, jobject schema);
+```
+
+```
+    /**
+     * get fields length of schema
+     *
+     * @return fields length
+     */
+    int getFieldsLength();
+```
+
+```
+    /**
+     * get field name by ordinal
+     *
+     * @param ordinal the data index of carbon schema
+     * @return ordinal field name
+     */
+    char *getFieldName(int ordinal);
+```
+
+```
+    /**
+     * get  field data type name by ordinal
+     *
+     * @param ordinal the data index of carbon schema
+     * @return ordinal field data type name
+     */
+    char *getFieldDataTypeName(int ordinal);
+```
+
+```
+    /**
+     * get  array child element data type name by ordinal
+     *
+     * @param ordinal the data index of carbon schema
+     * @return ordinal array child element data type name
+     */
+    char *getArrayElementTypeName(int ordinal);
+```
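+
+Below is a small, illustrative sketch combining CarbonSchemaReader and Schema (the JVM is assumed to be initialized and the store path is only an example):
+
+```
+// illustrative sketch only
+CarbonSchemaReader schemaReader(env);
+jobject schemaData = schemaReader.readSchema((char *) "./carbon-out");
+Schema schema(env, schemaData);
+// print every field name together with its data type name
+for (int i = 0; i < schema.getFieldsLength(); i++) {
+    printf("%s: %s\n", schema.getFieldName(i), schema.getFieldDataTypeName(i));
+}
+```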
 
+### CarbonProperties
+```
+    /**
+     * Constructor of CarbonProperties
+     *
+     * @param env JNI env
+     */
+    CarbonProperties(JNIEnv *env);
+```
+
+```
+    /**
+     * This method will be used to add a new property
+     * 
+     * @param key property key
+     * @param value property value
+     * @return CarbonProperties object
+     */
+    jobject addProperty(char *key, char *value);
+```
+
+```
+    /**
+     * This method will be used to get the properties value
+     *
+     * @param key  property key
+     * @return  property value
+     */
+    char *getProperty(char *key);
+```
+
+```
+    /**
+     * This method will be used to get the properties value
+     * if property is not present then it will return the default value
+     *
+     * @param key  property key
+     * @param defaultValue  property default Value
+     * @return
+     */
+    char *getProperty(char *key, char *defaultValue);
 ```
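+
+Below is a small, illustrative sketch of CarbonProperties usage (the JVM is assumed to be initialized; the property key is only an example taken from the configuration guide):
+
+```
+// illustrative sketch only
+CarbonProperties properties(env);
+properties.addProperty((char *) "enable.unsafe.sort", (char *) "true");
+// read the value back, falling back to a default when the key is absent
+printf("%s\n", properties.getProperty((char *) "enable.unsafe.sort", (char *) "false"));
+```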


[2/8] carbondata-site git commit: Added 1.5.1 version information

Posted by ra...@apache.org.
http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/site/markdown/configuration-parameters.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/configuration-parameters.md b/src/site/markdown/configuration-parameters.md
index 0a4565a..4aa2929 100644
--- a/src/site/markdown/configuration-parameters.md
+++ b/src/site/markdown/configuration-parameters.md
@@ -16,7 +16,7 @@
 -->
 
 # Configuring CarbonData
- This guide explains the configurations that can be used to tune CarbonData to achieve better performance.Most of the properties that control the internal settings have reasonable default values. They are listed along with the properties along with explanation.
+ This guide explains the configurations that can be used to tune CarbonData to achieve better performance. Most of the properties that control the internal settings have reasonable default values. The properties are listed below along with their default values and explanations.
 
  * [System Configuration](#system-configuration)
  * [Data Loading Configuration](#data-loading-configuration)
@@ -31,68 +31,66 @@ This section provides the details of all the configurations required for the Car
 
 | Property | Default Value | Description |
 |----------------------------|-------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| carbon.storelocation | spark.sql.warehouse.dir property value | Location where CarbonData will create the store, and write the data in its custom format. If not specified,the path defaults to spark.sql.warehouse.dir property. **NOTE:** Store location should be in HDFS. |
+| carbon.storelocation | spark.sql.warehouse.dir property value | Location where CarbonData will create the store, and write the data in its custom format. If not specified, the path defaults to the spark.sql.warehouse.dir property. **NOTE:** Store location should be in HDFS or S3. |
 | carbon.ddl.base.hdfs.url | (none) | To simplify and shorten the path to be specified in DDL/DML commands, this property is supported. This property is used to configure the HDFS relative path, the path configured in carbon.ddl.base.hdfs.url will be appended to the HDFS path configured in fs.defaultFS of core-site.xml. If this path is configured, then user need not pass the complete path while dataload. For example: If absolute path of the csv file is hdfs://10.18.101.155:54310/data/cnbc/2016/xyz.csv, the path "hdfs://10.18.101.155:54310" will come from property fs.defaultFS and user can configure the /data/cnbc/ as carbon.ddl.base.hdfs.url. Now while dataload user can specify the csv path as /2016/xyz.csv. |
 | carbon.badRecords.location | (none) | CarbonData can detect the records not conforming to defined table schema and isolate them as bad records. This property is used to specify where to store such bad records. |
 | carbon.streaming.auto.handoff.enabled | true | CarbonData supports storing of streaming data. To have high throughput for streaming, the data is written in Row format which is highly optimized for write, but performs poorly for query. When this property is true and when the streaming data size reaches ***carbon.streaming.segment.max.size***, CabonData will automatically convert the data to columnar format and optimize it for faster querying.**NOTE:** It is not recommended to keep the default value which is true. |
 | carbon.streaming.segment.max.size | 1024000000 | CarbonData writes streaming data in row format which is optimized for high write throughput. This property defines the maximum size of data to be held is row format, beyond which it will be converted to columnar format in order to support high performance query, provided ***carbon.streaming.auto.handoff.enabled*** is true. **NOTE:** Setting higher value will impact the streaming ingestion. The value has to be configured in bytes. |
-| carbon.query.show.datamaps | true | CarbonData stores datamaps as independent tables so as to allow independent maintenance to some extent. When this property is true,which is by default, show tables command will list all the tables including datatmaps(eg: Preaggregate table), else datamaps will be excluded from the table list.**NOTE:**  It is generally not required for the user to do any maintenance operations on these tables and hence not required to be seen.But it is shown by default so that user or admin can get clear understanding of the system for capacity planning. |
-| carbon.segment.lock.files.preserve.hours | 48 | In order to support parallel data loading onto the same table, CarbonData sequences(locks) at the granularity of segments.Operations affecting the segment(like IUD, alter) are blocked from parallel operations. This property value indicates the number of hours the segment lock files will be preserved after dataload. These lock files will be deleted with the clean command after the configured number of hours. |
-| carbon.timestamp.format | yyyy-MM-dd HH:mm:ss | CarbonData can understand data of timestamp type and process it in special manner.It can be so that the format of Timestamp data is different from that understood by CarbonData by default. This configuration allows users to specify the format of Timestamp in their data. |
+| carbon.query.show.datamaps | true | CarbonData stores datamaps as independent tables so as to allow independent maintenance to some extent. When this property is true, which is the default, the show tables command will list all the tables including datamaps (eg: Preaggregate table), else datamaps will be excluded from the table list.**NOTE:**  It is generally not required for the user to do any maintenance operations on these tables and hence not required to be seen. But it is shown by default so that the user or admin can get a clear understanding of the system for capacity planning. |
+| carbon.segment.lock.files.preserve.hours | 48 | In order to support parallel data loading onto the same table, CarbonData sequences(locks) at the granularity of segments. Operations affecting the segment(like IUD, alter) are blocked from parallel operations. This property value indicates the number of hours the segment lock files will be preserved after dataload. These lock files will be deleted with the clean command after the configured number of hours. |
+| carbon.timestamp.format | yyyy-MM-dd HH:mm:ss | CarbonData can understand data of timestamp type and process it in special manner. It can be so that the format of Timestamp data is different from that understood by CarbonData by default. This configuration allows users to specify the format of Timestamp in their data. |
 | carbon.lock.type | LOCALLOCK | This configuration specifies the type of lock to be acquired during concurrent operations on table. There are following types of lock implementation: - LOCALLOCK: Lock is created on local file system as file. This lock is useful when only one spark driver (thrift server) runs on a machine and no other CarbonData spark application is launched concurrently. - HDFSLOCK: Lock is created on HDFS file system as file. This lock is useful when multiple CarbonData spark applications are launched and no ZooKeeper is running on cluster and HDFS supports file based locking. |
 | carbon.lock.path | TABLEPATH | This configuration specifies the path where lock files have to be created. Recommended to configure zookeeper lock type or configure HDFS lock path(to this property) in case of S3 file system as locking is not feasible on S3. |
-| carbon.unsafe.working.memory.in.mb | 512 | CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. The Minimum value recommeded is 512MB. Any value below this is reset to default value of 512MB. **NOTE:** The below formulas explain how to arrive at the off-heap size required.<u>Memory Required For Data Loading:</u>(*carbon.number.of.cores.while.loading*) * (Number of tables to load in parallel) * (*offheap.sort.chunk.size.inmb* + *carbon.blockletgroup.size.in.mb* + *carbon.blockletgroup.size.in.mb*/3.5 ). <u>Memory required for Query:</u>SPARK_EXECUTOR_INSTANCES * (*carbon.blockletgroup.size.in.mb* + *carbon.blockletgroup.size.in.mb* * 3.5) * spark.executor.cores |
-| carbon.unsafe.driver.working.memory.in.mb | 60% of JVM Heap Memory | CarbonData supports storing data in unsafe on-heap memory in driver for certain operations like insert into, query for loading datamap cache. The Minimum value recommended is 512MB. |
+| enable.offheap.sort | true | Whether carbondata will use offheap or onheap memory. By default, the value is true and carbondata will use the property value from *carbon.unsafe.working.memory.in.mb* or *carbon.unsafe.driver.working.memory.in.mb* as the amount of memory; if it is false, carbondata will use the minimum value between the configured amount of unsafe memory and the 60% of JVM Heap Memory as the amount of memory. |
+| carbon.unsafe.working.memory.in.mb | 512 | CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. The minimum value recommended is 512MB. Any value below this is reset to the default value of 512MB. **NOTE:** The below formulas explain how to arrive at the off-heap size required.<u>Memory Required For Data Loading per executor: </u>(*carbon.number.of.cores.while.loading*) * (Number of tables to load in parallel) * (*offheap.sort.chunk.size.inmb* + *carbon.blockletgroup.size.in.mb* + *carbon.blockletgroup.size.in.mb*/3.5 ). <u>Memory required for Query per executor:</u> (*carbon.blockletgroup.size.in.mb* + *carbon.blockletgroup.size.in.mb* * 3.5) * spark.executor.cores |
+| carbon.unsafe.driver.working.memory.in.mb | (none) | CarbonData supports storing data in unsafe on-heap memory in driver for certain operations like insert into, query for loading datamap cache. The Minimum value recommended is 512MB. If this configuration is not set, carbondata will use the value of `carbon.unsafe.working.memory.in.mb`. |
 | carbon.update.sync.folder | /tmp/carbondata | CarbonData maintains last modification time entries in modifiedTime.mdt to determine the schema changes and reload only when necessary. This configuration specifies the path where the file needs to be written. |
-| carbon.invisible.segments.preserve.count | 200 | CarbonData maintains each data load entry in tablestatus file. The entries from this file are not deleted for those segments that are compacted or dropped, but are made invisible. If the number of data loads are very high, the size and number of entries in tablestatus file can become too many causing unnecessary reading of all data. This configuration specifies the number of segment entries to be maintained afte they are compacted or dropped.Beyond this, the entries are moved to a separate history tablestatus file. **NOTE:** The entries in tablestatus file help to identify the operations performed on CarbonData table and is also used for checkpointing during various data manupulation operations. This is similar to AUDIT file maintaining all the operations and its status.Hence the entries are never deleted but moved to a separate history file. |
-| carbon.lock.retries | 3 | CarbonData ensures consistency of operations by blocking certain operations from running in parallel. In order to block the operations from running in parallel, lock is obtained on the table. This configuration specifies the maximum number of retries to obtain the lock for any operations other than load. **NOTE:** Data manupulation operations like Compaction,UPDATE,DELETE  or LOADING,UPDATE,DELETE are not allowed to run in parallel.How ever data loading can happen in parallel to compaction. |
+| carbon.invisible.segments.preserve.count | 200 | CarbonData maintains each data load entry in tablestatus file. The entries from this file are not deleted for those segments that are compacted or dropped, but are made invisible. If the number of data loads is very high, the size and number of entries in tablestatus file can become too many causing unnecessary reading of all data. This configuration specifies the number of segment entries to be maintained after they are compacted or dropped. Beyond this, the entries are moved to a separate history tablestatus file. **NOTE:** The entries in tablestatus file help to identify the operations performed on CarbonData table and are also used for checkpointing during various data manipulation operations. This is similar to AUDIT file maintaining all the operations and its status. Hence the entries are never deleted but moved to a separate history file. |
+| carbon.lock.retries | 3 | CarbonData ensures consistency of operations by blocking certain operations from running in parallel. In order to block the operations from running in parallel, lock is obtained on the table. This configuration specifies the maximum number of retries to obtain the lock for any operations other than load. **NOTE:** Data manipulation operations like Compaction,UPDATE,DELETE or LOADING,UPDATE,DELETE are not allowed to run in parallel. However, data loading can happen in parallel to compaction. |
 | carbon.lock.retry.timeout.sec | 5 | Specifies the interval between the retries to obtain the lock for any operation other than load. **NOTE:** Refer to ***carbon.lock.retries*** for understanding why CarbonData uses locks for operations. |
 
 ## Data Loading Configuration
 
 | Parameter | Default Value | Description |
 |--------------------------------------|---------------|----------------------------------------------------------------------------------------------------------------------|
-| carbon.number.of.cores.while.loading | 2 | Number of cores to be used while loading data. This also determines the number of threads to be used to read the input files (csv) in parallel.**NOTE:** This configured value is used in every data loading step to parallelize the operations. Configuring a higher value can lead to increased early thread pre-emption by OS and there by reduce the overall performance. |
-| carbon.sort.size | 100000 | Number of records to hold in memory to sort and write intermediate temp files.**NOTE:** Memory required for data loading increases with increase in configured value as each thread would cache configured number of records. |
-| carbon.global.sort.rdd.storage.level | MEMORY_ONLY | Storage level to persist dataset of RDD/dataframe when loading data with 'sort_scope'='global_sort', if user's executor has less memory, set this parameter to 'MEMORY_AND_DISK_SER' or other storage level to correspond to different environment. [See detail](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence). |
-| carbon.load.global.sort.partitions | 0 | The Number of partitions to use when shuffling data for sort. Default value 0 means to use same number of map tasks as reduce tasks.**NOTE:** In general, it is recommended to have 2-3 tasks per CPU core in your cluster. |
-| carbon.options.bad.records.logger.enable | false | CarbonData can identify the records that are not conformant to schema and isolate them as bad records. Enabling this configuration will make CarbonData to log such bad records.**NOTE:** If the input data contains many bad records, logging them will slow down the over all data loading throughput. The data load operation status would depend on the configuration in ***carbon.bad.records.action***. |
-| carbon.bad.records.action | FAIL | CarbonData in addition to identifying the bad records, can take certain actions on such data. This configuration can have four types of actions for bad records namely FORCE, REDIRECT, IGNORE and FAIL. If set to FORCE then it auto-corrects the data by storing the bad records as NULL. If set to REDIRECT then bad records are written to the raw CSV instead of being loaded. If set to IGNORE then bad records are neither loaded nor written to the raw CSV. If set to FAIL then data loading fails if any bad records are found. |
-| carbon.options.is.empty.data.bad.record | false | Based on the business scenarios, empty("" or '' or ,,) data can be valid or invalid. This configuration controls how empty data should be treated by CarbonData. If false, then empty ("" or '' or ,,) data will not be considered as bad record and vice versa. |
-| carbon.options.bad.record.path | (none) | Specifies the HDFS path where bad records are to be stored. By default the value is Null. This path must to be configured by the user if ***carbon.options.bad.records.logger.enable*** is **true** or ***carbon.bad.records.action*** is **REDIRECT**. |
-| carbon.blockletgroup.size.in.mb | 64 | Please refer to [file-structure-of-carbondata](./file-structure-of-carbondata.md#carbondata-file-format) to understand the storage format of CarbonData. The data are read as a group of blocklets which are called blocklet groups. This parameter specifies the size of each blocklet group. Higher value results in better sequential IO access. The minimum value is 16MB, any value lesser than 16MB will reset to the default value (64MB).**NOTE:** Configuring a higher value might lead to poor performance as an entire blocklet group will have to read into memory before processing.For filter queries with limit, it is **not advisable** to have a bigger blocklet size. For Aggregation queries which need to return more number of rows,bigger blocklet size is advisable. |
-| carbon.sort.file.write.buffer.size | 16384 | CarbonData sorts and writes data to intermediate files to limit the memory usage. This configuration determines the buffer size to be used for reading and writing such files. **NOTE:** This configuration is useful to tune IO and derive optimal performance.Based on the OS and underlying harddisk type, these values can significantly affect the overall performance.It is ideal to tune the buffersize equivalent to the IO buffer size of the OS.Recommended range is between 10240 to 10485760 bytes. |
-| carbon.sort.intermediate.files.limit | 20 | CarbonData sorts and writes data to intermediate files to limit the memory usage. Before writing the target carbondat file, the data in these intermediate files needs to be sorted again so as to ensure the entire data in the data load is sorted. This configuration determines the minimum number of intermediate files after which merged sort is applied on them sort the data.**NOTE:** Intermediate merging happens on a separate thread in the background.Number of threads used is determined by ***carbon.merge.sort.reader.thread***.Configuring a low value will cause more time to be spent in merging these intermediate merged files which can cause more IO.Configuring a high value would cause not to use the idle threads to do intermediate sort merges.Range of recommended values are between 2 and 50 |
-| carbon.csv.read.buffersize.byte | 1048576 | CarbonData uses Hadoop InputFormat to read the csv files. This configuration value is used to pass buffer size as input for the Hadoop MR job when reading the csv files. This value is configured in bytes.**NOTE:** Refer to ***org.apache.hadoop.mapreduce.InputFormat*** documentation for additional information. |
-| carbon.merge.sort.reader.thread | 3 | CarbonData sorts and writes data to intermediate files to limit the memory usage. When the intermediate files reaches ***carbon.sort.intermediate.files.limit*** the files will be merged,the number of threads specified in this configuration will be used to read the intermediate files for performing merge sort.**NOTE:** Refer to ***carbon.sort.intermediate.files.limit*** for operation description.Configuring less  number of threads can cause merging to slow down over loading process where as configuring more number of threads can cause thread contention with threads in other data loading steps.Hence configure a fraction of ***carbon.number.of.cores.while.loading***. |
 | carbon.concurrent.lock.retries | 100 | CarbonData supports concurrent data loading onto same table. To ensure the loading status is correctly updated into the system,locks are used to sequence the status updation step. This configuration specifies the maximum number of retries to obtain the lock for updating the load status. **NOTE:** This value is high as more number of concurrent loading happens,more the chances of not able to obtain the lock when tried. Adjust this value according to the number of concurrent loading to be supported by the system. |
 | carbon.concurrent.lock.retry.timeout.sec | 1 | Specifies the interval between the retries to obtain the lock for concurrent operations. **NOTE:** Refer to ***carbon.concurrent.lock.retries*** for understanding why CarbonData uses locks during data loading operations. |
-| carbon.skip.empty.line | false | The csv files givent to CarbonData for loading can contain empty lines. Based on the business scenario, this empty line might have to be ignored or needs to be treated as NULL value for all columns.In order to define this business behavior, this configuration is provided.**NOTE:** In order to consider NULL values for non string columns and continue with data load, ***carbon.bad.records.action*** need to be set to **FORCE**;else data load will be failed as bad records encountered. |
-| carbon.enable.calculate.size | true | **For Load Operation**: Setting this property calculates the size of the carbon data file (.carbondata) and carbon index file (.carbonindex) for every load and updates the table status file. **For Describe Formatted**: Setting this property calculates the total size of the carbon data files and carbon index files for the respective table and displays in describe formatted command. **NOTE:** This is useful to determine the overall size of the carbondata table and also get an idea of how the table is growing in order to take up other backup strategy decisions. |
-| carbon.cutOffTimestamp | (none) | CarbonData has capability to generate the Dictionary values for the timestamp columns from the data itself without the need to store the computed dictionary values. This configuration sets the start date for calculating the timestamp. Java counts the number of milliseconds from start of "1970-01-01 00:00:00". This property is used to customize the start of position. For example "2000-01-01 00:00:00". **NOTE:** The date must be in the form ***carbon.timestamp.format***. CarbonData supports storing data for upto 68 years.For example, if the cut-off time is 1970-01-01 05:30:00, then data upto 2038-01-01 05:30:00 will be supported by CarbonData. |
-| carbon.timegranularity | SECOND | The configuration is used to specify the data granularity level such as DAY, HOUR, MINUTE, or SECOND. This helps to store more than 68 years of data into CarbonData. |
-| carbon.use.local.dir | false | CarbonData,during data loading, writes files to local temp directories before copying the files to HDFS. This configuration is used to specify whether CarbonData can write locally to tmp directory of the container or to the YARN application directory. |
-| carbon.use.multiple.temp.dir | false | When multiple disks are present in the system, YARN is generally configured with multiple disks to be used as temp directories for managing the containers. This configuration specifies whether to use multiple YARN local directories during data loading for disk IO load balancing.Enable ***carbon.use.local.dir*** for this configuration to take effect. **NOTE:** Data Loading is an IO intensive operation whose performance can be limited by the disk IO threshold, particularly during multi table concurrent data load.Configuring this parameter, balances the disk IO across multiple disks there by improving the over all load performance. |
-| carbon.sort.temp.compressor | (none) | CarbonData writes every ***carbon.sort.size*** number of records to intermediate temp files during data loading to ensure memory footprint is within limits. These temporary files can be compressed and written in order to save the storage space. This configuration specifies the name of compressor to be used to compress the intermediate sort temp files during sort procedure in data loading. The valid values are 'SNAPPY','GZIP','BZIP2','LZ4','ZSTD' and empty. By default, empty means that Carbondata will not compress the sort temp files. **NOTE:** Compressor will be useful if you encounter disk bottleneck.Since the data needs to be compressed and decompressed,it involves additional CPU cycles,but is compensated by the high IO throughput due to less data to be written or read from the disks. |
-| carbon.load.skewedDataOptimization.enabled | false | During data loading,CarbonData would divide the number of blocks equally so as to ensure all executors process same number of blocks. This mechanism satisfies most of the scenarios and ensures maximum parallel processing for optimal data loading performance.In some business scenarios, there might be scenarios where the size of blocks vary significantly and hence some executors would have to do more work if they get blocks containing more data. This configuration enables size based block allocation strategy for data loading. When loading, carbondata will use file size based block allocation strategy for task distribution. It will make sure that all the executors process the same size of data.**NOTE:** This configuration is useful if the size of your input data files varies widely, say 1MB to 1GB.For this configuration to work effectively,knowing the data pattern and size is important and necessary. |
-| carbon.load.min.size.enabled | false | During Data Loading, CarbonData would divide the number of files among the available executors to parallelize the loading operation. When the input data files are very small, this action causes to generate many small carbondata files. This configuration determines whether to enable node minumun input data size allocation strategy for data loading.It will make sure that the node load the minimum amount of data there by reducing number of carbondata files.**NOTE:** This configuration is useful if the size of the input data files are very small, like 1MB to 256MB.Refer to ***load_min_size_inmb*** to configure the minimum size to be considered for splitting files among executors. |
-| enable.data.loading.statistics | false | CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. This configuration when made ***true*** would log additional data loading statistics information to more accurately locate the issues being debugged. **NOTE:** Enabling this would log more debug information to log files, there by increasing the log files size significantly in short span of time.It is advised to configure the log files size, retention of log files parameters in log4j properties appropriately. Also extensive logging is an increased IO operation and hence over all data loading performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging. |
-| carbon.dictionary.chunk.size | 10000 | CarbonData generates dictionary keys and writes them to separate dictionary file during data loading. To optimize the IO, this configuration determines the number of dictionary keys to be persisted to dictionary file at a time. **NOTE:** Writing to file also serves as a commit point to the dictionary generated.Increasing more values in memory causes more data loss during system or application failure.It is advised to alter this configuration judiciously. |
-| dictionary.worker.threads | 1 | CarbonData supports Optimized data loading by relying on a dictionary server. Dictionary server helps to maintain dictionary values independent of the data loading and there by avoids reading the same input data multiples times. This configuration determines the number of concurrent dictionary generation or request that needs to be served by the dictionary server. **NOTE:** This configuration takes effect when ***carbon.options.single.pass*** is configured as true.Please refer to *carbon.options.single.pass*to understand how dictionary server optimizes data loading. |
+| carbon.csv.read.buffersize.byte | 1048576 | CarbonData uses Hadoop InputFormat to read the csv files. This configuration value is used to pass buffer size as input for the Hadoop MR job when reading the csv files. This value is configured in bytes. **NOTE:** Refer to ***org.apache.hadoop.mapreduce.InputFormat*** documentation for additional information. |
+| carbon.loading.prefetch | false | CarbonData uses univocity parser to read csv files. This configuration is used to inform the parser whether it can prefetch the data from csv files to speed up the reading.**NOTE:** Enabling prefetch improves the data loading performance, but needs higher memory to keep more records which are read ahead from disk. |
+| carbon.skip.empty.line | false | The csv files given to CarbonData for loading can contain empty lines. Based on the business scenario, this empty line might have to be ignored or needs to be treated as NULL value for all columns. In order to define this business behavior, this configuration is provided.**NOTE:** In order to consider NULL values for non string columns and continue with data load, ***carbon.bad.records.action*** needs to be set to **FORCE**; else the data load will fail as bad records are encountered. |
+| carbon.number.of.cores.while.loading | 2 | Number of cores to be used while loading data. This also determines the number of threads to be used to read the input files (csv) in parallel.**NOTE:** This configured value is used in every data loading step to parallelize the operations. Configuring a higher value can lead to increased early thread pre-emption by the OS and thereby reduce the overall performance. |
 | enable.unsafe.sort | true | CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. This configuration enables to use unsafe functions in CarbonData. **NOTE:** For operations like data loading, which generates more short lived Java objects, Java GC can be a bottle neck. Using unsafe can overcome the GC overhead and improve the overall performance. |
 | enable.offheap.sort | true | CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. This configuration enables using off-heap memory for sorting of data during data loading.**NOTE:**  ***enable.unsafe.sort*** configuration needs to be configured to true for using off-heap |
-| enable.inmemory.merge.sort | false | CarbonData sorts and writes data to intermediate files to limit the memory usage. These intermediate files needs to be sorted again using merge sort before writing to the final carbondata file.Performing merge sort in memory would increase the sorting performance at the cost of increased memory footprint. This Configuration specifies to do in-memory merge sort or to do file based merge sort. |
-| carbon.load.sort.scope | LOCAL_SORT | CarbonData can support various sorting options to match the balance between load and query performance. LOCAL_SORT:All the data given to an executor in the single load is fully sorted and written to carbondata files. Data loading performance is reduced a little as the entire data needs to be sorted in the executor. BATCH_SORT:Sorts the data in batches of configured size and writes to carbondata files. Data loading performance increases as the entire data need not be sorted.But query performance will get reduced due to false positives in block pruning and also due to more number of carbondata files written.Due to more number of carbondata files, if identified blocks > cluster parallelism, query performance and concurrency will get reduced.GLOBAL SORT:Entire data in the data load is fully sorted and written to carbondata files. Data loading performance would get reduced as the entire data needs to be sorted.But the query performance increases si
 gnificantly due to very less false positives and concurrency is also improved. **NOTE:** when BATCH_SORT is configured, it is recommended to keep ***carbon.load.batch.sort.size.inmb*** > ***carbon.blockletgroup.size.in.mb*** |
+| carbon.load.sort.scope | LOCAL_SORT | CarbonData can support various sorting options to match the balance between load and query performance. LOCAL_SORT:All the data given to an executor in the single load is fully sorted and written to carbondata files. Data loading performance is reduced a little as the entire data needs to be sorted in the executor. BATCH_SORT:Sorts the data in batches of configured size and writes to carbondata files. Data loading performance increases as the entire data need not be sorted. But query performance will get reduced due to false positives in block pruning and also due to more number of carbondata files written. Due to more number of carbondata files, if identified blocks > cluster parallelism, query performance and concurrency will get reduced. GLOBAL_SORT:Entire data in the data load is fully sorted and written to carbondata files. Data loading performance would get reduced as the entire data needs to be sorted. But the query performance increases significantly due to very few false positives and concurrency is also improved. **NOTE:** when BATCH_SORT is configured, it is recommended to keep ***carbon.load.batch.sort.size.inmb*** > ***carbon.blockletgroup.size.in.mb*** |
 | carbon.load.batch.sort.size.inmb | 0 | When  ***carbon.load.sort.scope*** is configured as ***BATCH_SORT***, this configuration needs to be added to specify the batch size for sorting and writing to carbondata files. **NOTE:** It is recommended to keep the value around 45% of ***carbon.sort.storage.inmemory.size.inmb*** to avoid spill to disk. Also it is recommended to keep the value higher than ***carbon.blockletgroup.size.in.mb***. Refer to *carbon.load.sort.scope* for more information on sort options and the advantages/disadvantages of each option. |
-| carbon.dictionary.server.port | 2030 | Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary.Single pass loading can be enabled using the option ***carbon.options.single.pass***. When this option is specified, a dictionary server will be internally started to handle the dictionary generation and query requests. This configuration specifies the port on which the server need to listen for incoming requests.Port value ranges between 0-65535 |
+| carbon.global.sort.rdd.storage.level | MEMORY_ONLY | Storage level to persist dataset of RDD/dataframe when loading data with 'sort_scope'='global_sort', if user's executor has less memory, set this parameter to 'MEMORY_AND_DISK_SER' or other storage level to correspond to different environment. [See detail](http://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence). |
+| carbon.load.global.sort.partitions | 0 | The number of partitions to use when shuffling data for global sort. Default value 0 means to use same number of map tasks as reduce tasks. **NOTE:** In general, it is recommended to have 2-3 tasks per CPU core in your cluster. |
+| carbon.sort.size | 100000 | Number of records to hold in memory to sort and write intermediate sort temp files. **NOTE:** Memory required for data loading will increase if you increase this value. Besides, each thread will cache this amount of records. The number of threads is configured by *carbon.number.of.cores.while.loading*. |
+| carbon.options.bad.records.logger.enable | false | CarbonData can identify the records that are not conformant to schema and isolate them as bad records. Enabling this configuration will make CarbonData log such bad records. **NOTE:** If the input data contains many bad records, logging them will slow down the overall data loading throughput. The data load operation status would depend on the configuration in ***carbon.bad.records.action***. |
+| carbon.bad.records.action | FAIL | CarbonData in addition to identifying the bad records, can take certain actions on such data. This configuration can have four types of actions for bad records namely FORCE, REDIRECT, IGNORE and FAIL. If set to FORCE then it auto-corrects the data by storing the bad records as NULL. If set to REDIRECT then bad records are written to the raw CSV instead of being loaded. If set to IGNORE then bad records are neither loaded nor written to the raw CSV. If set to FAIL then data loading fails if any bad records are found. |
+| carbon.options.is.empty.data.bad.record | false | Based on the business scenarios, empty("" or '' or ,,) data can be valid or invalid. This configuration controls how empty data should be treated by CarbonData. If false, then empty ("" or '' or ,,) data will not be considered as bad record and vice versa. |
+| carbon.options.bad.record.path | (none) | Specifies the HDFS path where bad records are to be stored. By default the value is Null. This path must be configured by the user if ***carbon.options.bad.records.logger.enable*** is **true** or ***carbon.bad.records.action*** is **REDIRECT**. |
+| carbon.blockletgroup.size.in.mb | 64 | Please refer to [file-structure-of-carbondata](./file-structure-of-carbondata.md#carbondata-file-format) to understand the storage format of CarbonData. The data are read as a group of blocklets which are called blocklet groups. This parameter specifies the size of each blocklet group. Higher value results in better sequential IO access. The minimum value is 16MB, any value lesser than 16MB will reset to the default value (64MB). **NOTE:** Configuring a higher value might lead to poor performance as an entire blocklet group will have to read into memory before processing. For filter queries with limit, it is **not advisable** to have a bigger blocklet size. For aggregation queries which need to return more number of rows, bigger blocklet size is advisable. |
+| carbon.sort.file.write.buffer.size | 16384 | CarbonData sorts and writes data to intermediate files to limit the memory usage. This configuration determines the buffer size to be used for reading and writing such files. **NOTE:** This configuration is useful to tune IO and derive optimal performance. Based on the OS and underlying harddisk type, these values can significantly affect the overall performance. It is ideal to tune the buffer size equivalent to the IO buffer size of the OS. Recommended range is between 10240 and 10485760 bytes. |
+| carbon.sort.intermediate.files.limit | 20 | CarbonData sorts and writes data to intermediate files to limit the memory usage. Before writing the target carbondata file, the records in these intermediate files need to be merged to reduce the number of intermediate files. This configuration determines the minimum number of intermediate files after which merge sort is applied on them to sort the data. **NOTE:** Intermediate merging happens on a separate thread in the background. Number of threads used is determined by ***carbon.merge.sort.reader.thread***. Configuring a low value will cause more time to be spent in merging these intermediate merged files which can cause more IO. Configuring a high value would cause the idle threads not to be used for intermediate sort merges. Recommended range is between 2 and 50. |
+| carbon.merge.sort.reader.thread | 3 | CarbonData sorts and writes data to intermediate files to limit the memory usage. When the number of intermediate files reaches ***carbon.sort.intermediate.files.limit***, the files will be merged in another thread pool. This value will control the size of the pool. Each thread will read the intermediate files and do merge sort and finally write the records to another file. **NOTE:** Refer to ***carbon.sort.intermediate.files.limit*** for operation description. Configuring a smaller number of threads can cause merging to slow down the overall loading process, whereas configuring a larger number of threads can cause thread contention with threads in other data loading steps. Hence configure a fraction of ***carbon.number.of.cores.while.loading***. |
 | carbon.merge.sort.prefetch | true | CarbonData writes every ***carbon.sort.size*** number of records to intermediate temp files during data loading to ensure memory footprint is within limits. These intermediate temp files will have to be sorted using merge sort before writing into CarbonData format. This configuration enables pre fetching of data from these temp files in order to optimize IO and speed up data loading process. |
-| carbon.loading.prefetch | false | CarbonData uses univocity parser to read csv files. This configuration is used to inform the parser whether it can prefetch the data from csv files to speed up the reading.**NOTE:** Enabling prefetch improves the data loading performance, but needs higher memory to keep more records which are read ahead from disk. |
 | carbon.prefetch.buffersize | 1000 | When the configuration ***carbon.merge.sort.prefetch*** is configured to true, we need to set the number of records that can be prefetched. This configuration is used to specify the number of records to be prefetched. **NOTE:** Configuring a higher number of records to be prefetched increases memory footprint as more records will have to be kept in memory. |
-| load_min_size_inmb | 256 | This configuration is used along with ***carbon.load.min.size.enabled***. This determines the minimum size of input files to be considered for distribution among executors while data loading.**NOTE:** Refer to ***carbon.load.min.size.enabled*** for understanding when this configuration needs to be used and its advantages and disadvantages. |
+| enable.inmemory.merge.sort | false | CarbonData sorts and writes data to intermediate files to limit the memory usage. These intermediate files need to be sorted again using merge sort before writing to the final carbondata file. Performing merge sort in memory would increase the sorting performance at the cost of increased memory footprint. This configuration specifies whether to do in-memory merge sort or file-based merge sort. |
+| carbon.sort.storage.inmemory.size.inmb | 512 | CarbonData writes every ***carbon.sort.size*** number of records to intermediate temp files during data loading to ensure memory footprint is within limits. When ***enable.unsafe.sort*** configuration is enabled, instead of using ***carbon.sort.size*** which is based on rows count, size occupied in memory is used to determine when to flush data pages to intermediate temp files. This configuration determines the memory to be used for storing data pages in memory. **NOTE:** Configuring a higher value ensures more data is maintained in memory and hence increases data loading performance due to reduced or no IO. Based on the memory availability in the nodes of the cluster, configure the values accordingly. |
 | carbon.load.sortmemory.spill.percentage | 0 | During data loading, some data pages are kept in memory up to the memory configured in ***carbon.sort.storage.inmemory.size.inmb*** beyond which they are spilled to disk as intermediate temporary sort files. This configuration determines after what percentage data needs to be spilled to disk. **NOTE:** Without this configuration, when the data pages occupy up to the configured memory, new data pages would be dumped to disk and old pages are still maintained in memory. |
+| carbon.enable.calculate.size | true | **For Load Operation**: Enabling this property will let carbondata calculate the size of the carbon data file (.carbondata) and the carbon index file (.carbonindex) for each load and update the table status file. **For Describe Formatted**: Enabling this property will let carbondata calculate the total size of the carbon data files and the carbon index files for each table and display it in the describe formatted command. **NOTE:** This is useful to determine the overall size of the carbondata table and also get an idea of how the table is growing in order to make backup strategy decisions. |
+| carbon.cutOffTimestamp | (none) | CarbonData has capability to generate the Dictionary values for the timestamp columns from the data itself without the need to store the computed dictionary values. This configuration sets the start date for calculating the timestamp. Java counts the number of milliseconds from start of "1970-01-01 00:00:00". This property is used to customize the start position. For example "2000-01-01 00:00:00". **NOTE:** The date must be in the form ***carbon.timestamp.format***. CarbonData supports storing data for up to 68 years. For example, if the cut-off time is 1970-01-01 05:30:00, then data up to 2038-01-01 05:30:00 will be supported by CarbonData. |
+| carbon.timegranularity | SECOND | The configuration is used to specify the data granularity level such as DAY, HOUR, MINUTE, or SECOND. This helps to store more than 68 years of data into CarbonData. |
+| carbon.use.local.dir | true | CarbonData, during data loading, writes files to local temp directories before copying the files to HDFS. This configuration is used to specify whether CarbonData can write locally to the tmp directory of the container or to the YARN application directory. |
+| carbon.sort.temp.compressor | SNAPPY | CarbonData writes every ***carbon.sort.size*** number of records to intermediate temp files during data loading to ensure memory footprint is within limits. These temporary files can be compressed and written in order to save the storage space. This configuration specifies the name of compressor to be used to compress the intermediate sort temp files during sort procedure in data loading. The valid values are 'SNAPPY','GZIP','BZIP2','LZ4','ZSTD' and empty. By default, empty means that Carbondata will not compress the sort temp files. **NOTE:** Compressor will be useful if you encounter disk bottleneck. Since the data needs to be compressed and decompressed, it involves additional CPU cycles, but is compensated by the high IO throughput due to less data to be written or read from the disks. |
+| carbon.load.skewedDataOptimization.enabled | false | During data loading, CarbonData would divide the number of blocks equally so as to ensure all executors process the same number of blocks. This mechanism satisfies most of the scenarios and ensures maximum parallel processing for optimal data loading performance. In some business scenarios, the size of blocks may vary significantly and hence some executors would have to do more work if they get blocks containing more data. This configuration enables size based block allocation strategy for data loading. When loading, carbondata will use file size based block allocation strategy for task distribution. It will make sure that all the executors process the same size of data. **NOTE:** This configuration is useful if the size of your input data files varies widely, say 1MB to 1GB. For this configuration to work effectively, knowing the data pattern and size is important and necessary. |
+| enable.data.loading.statistics | false | CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. This configuration when made ***true*** would log additional data loading statistics information to more accurately locate the issues being debugged. **NOTE:** Enabling this would log more debug information to log files, thereby increasing the log file size significantly in a short span of time. It is advised to configure the log file size and retention parameters in the log4j properties appropriately. Also, extensive logging increases IO operations and hence overall data loading performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging. |
+| carbon.dictionary.chunk.size | 10000 | CarbonData generates dictionary keys and writes them to separate dictionary file during data loading. To optimize the IO, this configuration determines the number of dictionary keys to be persisted to dictionary file at a time. **NOTE:** Writing to file also serves as a commit point to the dictionary generated. Keeping more values in memory increases the risk of data loss in case of a system or application failure. It is advised to alter this configuration judiciously. |
+| dictionary.worker.threads | 1 | CarbonData supports optimized data loading by relying on a dictionary server. Dictionary server helps to maintain dictionary values independent of the data loading and thereby avoids reading the same input data multiple times. This configuration determines the number of concurrent dictionary generation requests that need to be served by the dictionary server. **NOTE:** This configuration takes effect when ***carbon.options.single.pass*** is configured as true. Please refer to ***carbon.options.single.pass*** to understand how dictionary server optimizes data loading. |
+| carbon.dictionary.server.port | 2030 | Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary. Single pass loading can be enabled using the option ***carbon.options.single.pass***. When this option is specified, a dictionary server will be internally started to handle the dictionary generation and query requests. This configuration specifies the port on which the server needs to listen for incoming requests. Port values range between 0 and 65535. |
 | carbon.load.directWriteToStorePath.enabled | false | During data load, all the carbondata files are written to local disk and finally copied to the target store location in HDFS/S3. Enabling this parameter will make carbondata files to be written directly onto target HDFS/S3 location bypassing the local disk. **NOTE:** Writing directly to HDFS/S3 saves local disk IO (once for writing the files and again for copying to HDFS/S3), thereby improving the performance. But the drawback is when data loading fails or the application crashes, unwanted carbondata files will remain in the target HDFS/S3 location until it is cleared during next data load or by running *CLEAN FILES* DDL command |
 | carbon.options.serialization.null.format | \N | Based on the business scenarios, some columns might need to be loaded with null values. As null value cannot be written in csv files, some special characters might be adopted to specify null values. This configuration can be used to specify the null values format in the data being loaded. |
-| carbon.sort.storage.inmemory.size.inmb | 512 | CarbonData writes every ***carbon.sort.size*** number of records to intermediate temp files during data loading to ensure memory footprint is within limits. When ***enable.unsafe.sort*** configuration is enabled, instead of using ***carbon.sort.size*** which is based on rows count, size occupied in memory is used to determine when to flush data pages to intermediate temp files. This configuration determines the memory to be used for storing data pages in memory. **NOTE:** Configuring a higher value ensures more data is maintained in memory and hence increases data loading performance due to reduced or no IO.Based on the memory availability in the nodes of the cluster, configure the values accordingly. |
 | carbon.column.compressor | snappy | CarbonData will compress the column values using the compressor specified by this configuration. Currently CarbonData supports 'snappy' and 'zstd' compressors. |
 | carbon.minmax.allowed.byte.count | 200 | CarbonData will write the min max values for string/varchar types column using the byte count specified by this configuration. Max value is 1000 bytes (500 characters) and Min value is 10 bytes (5 characters). **NOTE:** This property is useful for reducing the store size thereby improving the query performance but can lead to query degradation if the value is not configured properly. |
 
@@ -101,44 +99,46 @@ This section provides the details of all the configurations required for the Car
 | Parameter | Default Value | Description |
 |-----------------------------------------------|---------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | carbon.number.of.cores.while.compacting | 2 | Number of cores to be used while compacting data. This also determines the number of threads to be used to read carbondata files in parallel. |
-| carbon.compaction.level.threshold | 4, 3 | Each CarbonData load will create one segment, if every load is small in size it will generate many small file over a period of time impacting the query performance. This configuration is for minor compaction which decides how many segments to be merged. Configuration is of the form (x,y). Compaction will be triggered for every x segments and form a single level 1 compacted segment. When the number of compacted level 1 segments reach y, compaction will be triggered again to merge them to form a single level 2 segment. For example: If it is set as 2, 3 then minor compaction will be triggered for every 2 segments. 3 is the number of level 1 compacted segments which is further compacted to new segment.**NOTE:** When ***carbon.enable.auto.load.merge*** is **true**, configuring higher values cause overall data loading time to increase as compaction will be triggered after data loading is complete but status is not returned till compaction is complete. But compacting more number of segments can increase query performance.Hence optimal values needs to be configured based on the business scenario. Valid values are between 0 to 100. |
+| carbon.compaction.level.threshold | 4, 3 | Each CarbonData load will create one segment; if every load is small in size it will generate many small files over a period of time, impacting the query performance. This configuration is for minor compaction which decides how many segments are to be merged. Configuration is of the form (x,y). Compaction will be triggered for every x segments and form a single level 1 compacted segment. When the number of compacted level 1 segments reaches y, compaction will be triggered again to merge them to form a single level 2 segment. For example: If it is set as 2, 3 then minor compaction will be triggered for every 2 segments. 3 is the number of level 1 compacted segments which is further compacted to a new segment. **NOTE:** When ***carbon.enable.auto.load.merge*** is **true**, configuring higher values causes overall data loading time to increase as compaction will be triggered after data loading is complete but the status is not returned till compaction is complete. But compacting more segments can increase query performance. Hence optimal values need to be configured based on the business scenario. Valid values are between 0 and 100. |
 | carbon.major.compaction.size | 1024 | To improve query performance, all the segments can be merged and compacted into a single segment up to the configured size. This major compaction size can be configured using this parameter. Segments whose combined size is below this threshold will be merged. This value is expressed in MB. |
-| carbon.horizontal.compaction.enable | true | CarbonData supports DELETE/UPDATE functionality by creating delta data files for existing carbondata files. These delta files would grow as more number of DELETE/UPDATE operations are performed.Compaction of these delta files are termed as horizontal compaction. This configuration is used to turn ON/OFF horizontal compaction. After every DELETE and UPDATE statement, horizontal compaction may occur in case the delta (DELETE/ UPDATE) files becomes more than specified threshold.**NOTE: **Having many delta files will reduce the query performance as scan has to happen on all these files before the final state of data can be decided.Hence it is advisable to keep horizontal compaction enabled and configure reasonable values to ***carbon.horizontal.UPDATE.compaction.threshold*** and ***carbon.horizontal.DELETE.compaction.threshold*** |
-| carbon.horizontal.update.compaction.threshold | 1 | This configuration specifies the threshold limit on number of UPDATE delta files within a segment. In case the number of delta files goes beyond the threshold, the UPDATE delta files within the segment becomes eligible for horizontal compaction and are compacted into single UPDATE delta file.Values range between 1 to 10000. |
-| carbon.horizontal.delete.compaction.threshold | 1 | This configuration specifies the threshold limit on number of DELETE delta files within a block of a segment. In case the number of delta files goes beyond the threshold, the DELETE delta files for the particular block of the segment becomes eligible for horizontal compaction and are compacted into single DELETE delta file.Values range between 1 to 10000. |
-| carbon.update.segment.parallelism | 1 | CarbonData processes the UPDATE operations by grouping records belonging to a segment into a single executor task. When the amount of data to be updated is more, this behavior causes problems like restarting of executor due to low memory and data-spill related errors. This property specifies the parallelism for each segment during update.**NOTE:** It is recommended to set this value to a multiple of the number of executors for balance.Values range between 1 to 1000. |
-| carbon.numberof.preserve.segments | 0 | If the user wants to preserve some number of segments from being compacted then he can set this configuration. Example: carbon.numberof.preserve.segments = 2 then 2 latest segments will always be excluded from the compaction. No segments will be preserved by default.**NOTE:** This configuration is useful when the chances of input data can be wrong due to environment scenarios.Preserving some of the latest segments from being compacted can help to easily delete the wrongly loaded segments.Once compacted,it becomes more difficult to determine the exact data to be deleted(except when data is incrementing according to time) |
-| carbon.allowed.compaction.days | 0 | This configuration is used to control on the number of recent segments that needs to be compacted, ignoring the older ones. This configuration is in days.For Example: If the configuration is 2, then the segments which are loaded in the time frame of past 2 days only will get merged. Segments which are loaded earlier than 2 days will not be merged. This configuration is disabled by default.**NOTE:** This configuration is useful when a bulk of history data is loaded into the carbondata.Query on this data is less frequent.In such cases involving these segments also into compaction will affect the resource consumption, increases overall compaction time. |
-| carbon.enable.auto.load.merge | false | Compaction can be automatically triggered once data load completes. This ensures that the segments are merged in time and thus query times does not increase with increase in segments. This configuration enables to do compaction along with data loading.**NOTE: **Compaction will be triggered once the data load completes.But the status of data load wait till the compaction is completed.Hence it might look like data loading time has increased, but thats not the case.Moreover failure of compaction will not affect the data loading status.If data load had completed successfully, the status would be updated and segments are committed.However, failure while data loading, will not trigger compaction and error is returned immediately. |
+| carbon.horizontal.compaction.enable | true | CarbonData supports DELETE/UPDATE functionality by creating delta data files for existing carbondata files. These delta files would grow as more number of DELETE/UPDATE operations are performed. Compaction of these delta files is termed as horizontal compaction. This configuration is used to turn ON/OFF horizontal compaction. After every DELETE and UPDATE statement, horizontal compaction may occur in case the delta (DELETE/UPDATE) files become more than the specified threshold. **NOTE:** Having many delta files will reduce the query performance as scan has to happen on all these files before the final state of data can be decided. Hence it is advisable to keep horizontal compaction enabled and configure reasonable values to ***carbon.horizontal.UPDATE.compaction.threshold*** and ***carbon.horizontal.DELETE.compaction.threshold*** |
+| carbon.horizontal.update.compaction.threshold | 1 | This configuration specifies the threshold limit on number of UPDATE delta files within a segment. In case the number of delta files goes beyond the threshold, the UPDATE delta files within the segment become eligible for horizontal compaction and are compacted into a single UPDATE delta file. Values range between 1 and 10000. |
+| carbon.horizontal.delete.compaction.threshold | 1 | This configuration specifies the threshold limit on number of DELETE delta files within a block of a segment. In case the number of delta files goes beyond the threshold, the DELETE delta files for the particular block of the segment become eligible for horizontal compaction and are compacted into a single DELETE delta file. Values range between 1 and 10000. |
+| carbon.update.segment.parallelism | 1 | CarbonData processes the UPDATE operations by grouping records belonging to a segment into a single executor task. When the amount of data to be updated is large, this behavior causes problems like restarting of executors due to low memory and data-spill related errors. This property specifies the parallelism for each segment during update. **NOTE:** It is recommended to set this value to a multiple of the number of executors for balance. Values range between 1 and 1000. |
+| carbon.numberof.preserve.segments | 0 | If the user wants to preserve some number of segments from being compacted, then this configuration can be set. Example: if carbon.numberof.preserve.segments = 2, then the 2 latest segments will always be excluded from the compaction. No segments will be preserved by default. **NOTE:** This configuration is useful when there is a chance that the input data is wrong due to environmental factors. Preserving some of the latest segments from being compacted can help to easily delete the wrongly loaded segments. Once compacted, it becomes more difficult to determine the exact data to be deleted (except when data is incrementing according to time) |
+| carbon.allowed.compaction.days | 0 | This configuration is used to control the number of recent segments that need to be compacted, ignoring the older ones. This configuration is in days. For example: If the configuration is 2, then only the segments which are loaded in the time frame of the past 2 days will get merged. Segments which are loaded earlier than 2 days will not be merged. This configuration is disabled by default. **NOTE:** This configuration is useful when a bulk of history data is loaded into CarbonData and queries on this data are less frequent. In such cases, including these segments in compaction increases resource consumption and overall compaction time. |
+| carbon.enable.auto.load.merge | false | Compaction can be automatically triggered once data load completes. This ensures that the segments are merged in time and thus query times do not increase with increase in segments. This configuration enables compaction along with data loading. **NOTE:** Compaction will be triggered once the data load completes, but the status of the data load waits till the compaction is completed. Hence it might look like data loading time has increased, but that's not the case. Moreover, failure of compaction will not affect the data loading status. If data load had completed successfully, the status would be updated and segments are committed. However, a failure during data loading will not trigger compaction, and an error is returned immediately. |
 | carbon.enable.page.level.reader.in.compaction|true|Enabling page level reader for compaction reduces the memory usage while compacting more number of segments. It allows reading only page by page instead of reading the whole blocklet into memory. **NOTE:** Please refer to [file-structure-of-carbondata](./file-structure-of-carbondata.md#carbondata-file-format) to understand the storage format of CarbonData and concepts of pages.|
 | carbon.concurrent.compaction | true | Compaction of different tables can be executed concurrently. This configuration determines whether to compact all qualifying tables in parallel or not. **NOTE:** Compacting concurrently is a resource-demanding operation and needs more resources, thereby affecting the query performance also. This configuration is **deprecated** and might be removed in future releases. |
-| carbon.compaction.prefetch.enable | false | Compaction operation is similar to Query + data load where in data from qualifying segments are queried and data loading performed to generate a new single segment. This configuration determines whether to query ahead data from segments and feed it for data loading. **NOTE: **This configuration is disabled by default as it needs extra resources for querying extra data.Based on the memory availability on the cluster, user can enable it to improve compaction performance. |
-| carbon.merge.index.in.segment | true | Each CarbonData file has a companion CarbonIndex file which maintains the metadata about the data. These CarbonIndex files are read and loaded into driver and is used subsequently for pruning of data during queries. These CarbonIndex files are very small in size(few KB) and are many.Reading many small files from HDFS is not efficient and leads to slow IO performance.Hence these CarbonIndex files belonging to a segment can be combined into  a single file and read once there by increasing the IO throughput. This configuration enables to merge all the CarbonIndex files into a single MergeIndex file upon data loading completion.**NOTE:** Reading a single big file is more efficient in HDFS and IO throughput is very high.Due to this the time needed to load the index files into memory when query is received for the first time on that table is significantly reduced and there by significantly reduces the delay in serving the first query. |
+| carbon.compaction.prefetch.enable | false | Compaction operation is similar to Query + data load, wherein data from qualifying segments is queried and data loading is performed to generate a new single segment. This configuration determines whether to query ahead data from segments and feed it for data loading. **NOTE:** This configuration is disabled by default as it needs extra resources for querying extra data. Based on the memory availability on the cluster, the user can enable it to improve compaction performance. |
+| carbon.merge.index.in.segment | true | Each CarbonData file has a companion CarbonIndex file which maintains the metadata about the data. These CarbonIndex files are read and loaded into the driver and are used subsequently for pruning of data during queries. These CarbonIndex files are very small in size (a few KB) and are many. Reading many small files from HDFS is not efficient and leads to slow IO performance. Hence these CarbonIndex files belonging to a segment can be combined into a single file and read once, thereby increasing the IO throughput. This configuration enables merging all the CarbonIndex files into a single MergeIndex file upon data loading completion. **NOTE:** Reading a single big file is more efficient in HDFS and IO throughput is very high. Due to this, the time needed to load the index files into memory when a query is received for the first time on that table is significantly reduced, thereby significantly reducing the delay in serving the first query. |
 
 ## Query Configuration
 
 | Parameter | Default Value | Description |
 |--------------------------------------|---------------|---------------------------------------------------|
-| carbon.max.driver.lru.cache.size | -1 | Maximum memory **(in MB)** upto which the driver process can cache the data (BTree and dictionary values). Beyond this, least recently used data will be removed from cache before loading new set of values.Default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. **NOTE:** Minimum number of entries that needs to be removed from cache in order to load the new set of data is determined and unloaded.ie.,for example if 3 cache entries qualify for pre-emption, out of these, those entries that free up more cache memory is removed prior to others. Please refer [FAQs](./faq.md#how-to-check-lru-cache-memory-footprint) for checking LRU cache memory footprint. |
-| carbon.max.executor.lru.cache.size | -1 | Maximum memory **(in MB)** upto which the executor process can cache the data (BTree and reverse dictionary values).Default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. **NOTE:** If this parameter is not configured, then the value of ***carbon.max.driver.lru.cache.size*** will be used. |
+| carbon.max.driver.lru.cache.size | -1 | Maximum memory **(in MB)** up to which the driver process can cache the data (BTree and dictionary values). Beyond this, least recently used data will be removed from cache before loading a new set of values. Default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. **NOTE:** The minimum number of entries that needs to be removed from cache in order to load the new set of data is determined and unloaded, i.e., for example, if 3 cache entries qualify for pre-emption, out of these, those entries that free up more cache memory are removed prior to others. Please refer [FAQs](./faq.md#how-to-check-lru-cache-memory-footprint) for checking LRU cache memory footprint. |
+| carbon.max.executor.lru.cache.size | -1 | Maximum memory **(in MB)** up to which the executor process can cache the data (BTree and reverse dictionary values). Default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. **NOTE:** If this parameter is not configured, then the value of ***carbon.max.driver.lru.cache.size*** will be used. |
 | max.query.execution.time | 60 | Maximum time allowed for one query to be executed. The value is in minutes. |
 | carbon.enableMinMax | true | CarbonData maintains the metadata which enables pruning of unnecessary files from being scanned as per the query conditions. To achieve pruning, Min and Max values of each column are maintained. Based on the filter condition in the query, certain data can be skipped from scanning by matching the filter value against the min and max values of the column(s) present in that carbondata file. This pruning enhances query performance significantly. |
-| carbon.dynamicallocation.schedulertimeout | 5 | CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks needs to be launched and how much work each task need to do in a Spark cluster for any query on CarbonData. To determine the number of tasks that can be scheduled, knowing the count of active executors is necessary. When dynamic allocation is enabled on a YARN based spark cluster, executor processes are shutdown if no request is received for a particular amount of time. The executors are brought up when the requet is received again. This configuration specifies the maximum time (unit in seconds) the carbon scheduler can wait for executor to be active. Minimum value is 5 sec and maximum value is 15 sec.**NOTE: **Waiting for longer time leads to slow query response time.Moreover it might be possible that YARN is not able to start the executors and waiting is not beneficial. |
-| carbon.scheduler.minregisteredresourcesratio | 0.8 | Specifies the minimum resource (executor) ratio needed for starting the block distribution. The default value is 0.8, which indicates 80% of the requested resource is allocated for starting block distribution. The minimum value is 0.1 min and the maximum value is 1.0. |
+| carbon.dynamical.location.scheduler.timeout | 5 | CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks needs to be launched and how much work each task needs to do in a Spark cluster for any query on CarbonData. To determine the number of tasks that can be scheduled, knowing the count of active executors is necessary. When dynamic allocation is enabled on a YARN based spark cluster, executor processes are shutdown if no request is received for a particular amount of time. The executors are brought up when the request is received again. This configuration specifies the maximum time (unit in seconds) the carbon scheduler can wait for executor to be active. Minimum value is 5 sec and maximum value is 15 sec. **NOTE:** Waiting for a longer time leads to slow query response time. Moreover it might be possible that YARN is not able to start the executors and waiting is not beneficial. |
+| carbon.scheduler.min.registered.resources.ratio | 0.8 | Specifies the minimum resource (executor) ratio needed for starting the block distribution. The default value is 0.8, which indicates 80% of the requested resource is allocated for starting block distribution. The minimum value is 0.1 and the maximum value is 1.0. |
 | carbon.search.enabled (Alpha Feature) | false | If set to true, it will use CarbonReader to do distributed scan directly instead of using compute framework like spark, thus avoiding limitation of compute framework like SQL optimizer and task scheduling overhead. |
 | carbon.search.query.timeout | 10s | Time within which the result is expected from the workers, beyond which the query is terminated |
 | carbon.search.scan.thread | num of cores available in worker node | Number of cores to be used in each worker for performing scan. |
 | carbon.search.master.port | 10020 | Port on which the search master listens for incoming query requests |
 | carbon.search.worker.port | 10021 | Port on which search master communicates with the workers. |
-| carbon.search.worker.workload.limit | 10 * *carbon.search.scan.thread* | Maximum number of active requests that can be sent to a worker.Beyond which the request needs to be rescheduled for later time or to a different worker. |
+| carbon.search.worker.workload.limit | 10 * *carbon.search.scan.thread* | Maximum number of active requests that can be sent to a worker, beyond which the request needs to be rescheduled for a later time or to a different worker. |
 | carbon.detail.batch.size | 100 | The buffer size to store records, returned from the block scan. In limit scenario this parameter is very important. For example your query limit is 1000. But if we set this value to 3000 that means we get 3000 records from scan but spark will only take 1000 rows. So the remaining 2000 are useless. In one finance test case, after we set it to 100, in the limit 1000 scenario the performance increased about 2 times compared to setting this value to 12000. |
 | carbon.enable.vector.reader | true | Spark added vector processing to optimize CPU cache misses and thereby increase the query performance. This configuration enables fetching data as a columnar batch of size 4*1024 rows instead of fetching data row by row and providing it to Spark, so that there is improvement in select query performance. |
-| carbon.task.distribution | block | CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks needs to be launched and how much work each task need to do in a Spark cluster for any query on CarbonData.Each of these task distribution suggestions has its own advantages and disadvantages.Based on the customer use case, appropriate task distribution can be configured.**block**: Setting this value will launch one task per block. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **custom**: Setting this value will group the blocks and distribute it uniformly to the available resources in the cluster. This enhances the query performance but not suggested in case of concurrent queries and queries having big shuffling scenarios. **blocklet**: Setting this value will launch one task per blocklet. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **merge_small_files**: Setting this value will merge all the small carbondata files upto a bigger size configured by ***spark.sql.files.maxPartitionBytes*** (128 MB is the default value,it is configurable) during querying. The small carbondata files are combined to a map task to reduce the number of read task. This enhances the performance. |
-| carbon.custom.block.distribution | false | CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks needs to be launched and how much work each task need to do in a Spark cluster for any query on CarbonData. When this configuration is true, CarbonData would distribute the available blocks to be scanned among the available number of cores.For Example:If there are 10 blocks to be scanned and only 3 tasks can be run(only 3 executor cores available in the cluster), CarbonData would combine blocks as 4,3,3 and give it to 3 tasks to run. **NOTE:** When this configuration is false, as per the ***carbon.task.distribution*** configuration, each block/blocklet would be given to each task. |
-| enable.query.statistics | false | CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. This configuration when made ***true*** would log additional query statistics information to more accurately locate the issues being debugged.**NOTE:** Enabling this would log more debug information to log files, there by increasing the log files size significantly in short span of time.It is advised to configure the log files size, retention of log files parameters in log4j properties appropriately. Also extensive logging is an increased IO operation and hence over all query performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging. |
+| carbon.task.distribution | block | CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks needs to be launched and how much work each task needs to do in a Spark cluster for any query on CarbonData. Each of these task distribution suggestions has its own advantages and disadvantages. Based on the customer use case, appropriate task distribution can be configured. **block**: Setting this value will launch one task per block. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **custom**: Setting this value will group the blocks and distribute them uniformly to the available resources in the cluster. This enhances the query performance but is not suggested in case of concurrent queries and queries having big shuffling scenarios. **blocklet**: Setting this value will launch one task per blocklet. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **merge_small_files**: Setting this value will merge all the small carbondata files up to a bigger size configured by ***spark.sql.files.maxPartitionBytes*** (128 MB is the default value, it is configurable) during querying. The small carbondata files are combined in a map task to reduce the number of read tasks. This enhances the performance. |
+| carbon.custom.block.distribution | false | CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks needs to be launched and how much work each task needs to do in a Spark cluster for any query on CarbonData. When this configuration is true, CarbonData would distribute the available blocks to be scanned among the available number of cores. For example: If there are 10 blocks to be scanned and only 3 tasks can be run (only 3 executor cores available in the cluster), CarbonData would combine blocks as 4,3,3 and give it to 3 tasks to run. **NOTE:** When this configuration is false, as per the ***carbon.task.distribution*** configuration, each block/blocklet would be given to each task. |
+| enable.query.statistics | false | CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. This configuration when made ***true*** would log additional query statistics information to more accurately locate the issues being debugged. **NOTE:** Enabling this would log more debug information to log files, thereby increasing the log file size significantly in a short span of time. It is advised to configure the log file size and retention parameters in the log4j properties appropriately. Also, extensive logging increases IO operations and hence overall query performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging. |
 | enable.unsafe.in.query.processing | false | CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. This configuration enables the use of unsafe functions in CarbonData while scanning the data during queries. |
-| carbon.query.validate.directqueryondatamap | true | CarbonData supports creating pre-aggregate table datamaps as an independent tables. For some debugging purposes, it might be required to directly query from such datamap tables. This configuration allows to query on such datamaps. |
-| carbon.heap.memory.pooling.threshold.bytes | 1048576 | CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. Using unsafe, memory can be allocated on Java Heap or off heap. This configuration controls the allocation mechanism on Java HEAP.If the heap memory allocations of the given size is greater or equal than this value,it should go through the pooling mechanism.But if set this size to -1, it should not go through the pooling mechanism.Default value is 1048576(1MB, the same as Spark).Value to be specified in bytes. |
+| carbon.query.validate.direct.query.on.datamap | true | CarbonData supports creating pre-aggregate table datamaps as independent tables. For some debugging purposes, it might be required to directly query from such datamap tables. This configuration allows querying such datamaps directly. |
+| carbon.max.driver.threads.for.block.pruning | 4 | Number of threads used for driver pruning when the carbon files are more than 100k. This configuration can be used to set the number of threads between 1 and 4. |
+| carbon.heap.memory.pooling.threshold.bytes | 1048576 | CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. Using unsafe, memory can be allocated on the Java heap or off heap. This configuration controls the allocation mechanism on the Java heap. If the heap memory allocation of the given size is greater than or equal to this value, it goes through the pooling mechanism. But if this size is set to -1, it does not go through the pooling mechanism. Default value is 1048576 (1MB, the same as Spark). Value to be specified in bytes. |
+| carbon.push.rowfilters.for.vector | false | When enabled, complete row filters will be handled by Carbon in case of vector reads. If it is disabled, then only page-level pruning will be done by Carbon and row-level filtering will be done by Spark for vector reads. Also, there are scan optimizations in Carbon to avoid multiple data copies when this parameter is set to false. There is no change in the flow for non-vector based queries. |
 
 ## Data Mutation Configuration
 | Parameter | Default Value | Description |
@@ -197,21 +197,21 @@ RESET
 
 | Properties                                | Description                                                  |
 | ----------------------------------------- | ------------------------------------------------------------ |
-| carbon.options.bad.records.logger.enable  | CarbonData can identify the records that are not conformant to schema and isolate them as bad records.Enabling this configuration will make CarbonData to log such bad records.**NOTE:** If the input data contains many bad records, logging them will slow down the over all data loading throughput. The data load operation status would depend on the configuration in ***carbon.bad.records.action***. |
+| carbon.options.bad.records.logger.enable  | CarbonData can identify the records that are not conformant to schema and isolate them as bad records. Enabling this configuration will make CarbonData log such bad records. **NOTE:** If the input data contains many bad records, logging them will slow down the overall data loading throughput. The data load operation status would depend on the configuration in ***carbon.bad.records.action***. |
 | carbon.options.bad.records.logger.enable  | To enable or disable bad record logger.                      |
 | carbon.options.bad.records.action         | This property can have four types of actions for bad records FORCE, REDIRECT, IGNORE and FAIL. If set to FORCE then it auto-corrects the data by storing the bad records as NULL. If set to REDIRECT then bad records are written to the raw CSV instead of being loaded. If set to IGNORE then bad records are neither loaded nor written to the raw CSV. If set to FAIL then data loading fails if any bad records are found. |
 | carbon.options.is.empty.data.bad.record   | If false, then empty ("" or '' or ,,) data will not be considered as bad record and vice versa. |
 | carbon.options.batch.sort.size.inmb       | Size of batch data to keep in memory. As a thumb rule, it is supposed to be less than 45% of sort.inmemory.size.inmb, otherwise it may spill intermediate data to disk. |
-| carbon.options.single.pass                | Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary. This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE. **NOTE:** Enabling this starts a new dictionary server to handle dictionary generation requests during data loading. Without this option, the input csv files will have to read twice.Once while dictionary generation and persisting to the dictionary files.second when the data loading need to convert the input data into carbondata format.Enabling this optimizes the optimizes to read the input data only once there by reducing IO and hence over all data loading time.If concurrent data loading needs to be supported, consider tuning ***dictionary.worker.threads***.Port on which the dictionary server need to listen on can be configured using the configuration ***carbon.dictionary.server.port***. |
+| carbon.options.single.pass                | Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary. This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE. **NOTE:** Enabling this starts a new dictionary server to handle dictionary generation requests during data loading. Without this option, the input csv files will have to be read twice: once for dictionary generation and persisting to the dictionary files, and a second time when data loading needs to convert the input data into carbondata format. Enabling this optimizes the load to read the input data only once, thereby reducing IO and hence overall data loading time. If concurrent data loading needs to be supported, consider tuning ***dictionary.worker.threads***. The port on which the dictionary server needs to listen can be configured using the configuration ***carbon.dictionary.server.port***. |
 | carbon.options.bad.record.path            | Specifies the HDFS path where bad records needs to be stored. |
 | carbon.custom.block.distribution          | Specifies whether to use the Spark or Carbon block distribution feature.**NOTE: **Refer to [Query Configuration](#query-configuration)#carbon.custom.block.distribution for more details on Ca

<TRUNCATED>

[5/8] carbondata-site git commit: Added 1.5.1 version information

Posted by ra...@apache.org.
http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/CSDK-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/CSDK-guide.html b/src/main/webapp/CSDK-guide.html
index 8168aaf..73e1d67 100644
--- a/src/main/webapp/CSDK-guide.html
+++ b/src/main/webapp/CSDK-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -219,119 +219,28 @@
                                 <div class="col-sm-12  col-md-12">
                                     <div>
 <h1>
-<a id="csdk-guide" class="anchor" href="#csdk-guide" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CSDK Guide</h1>
-<p>CarbonData CSDK provides C++ interface to write and read carbon file.
-CSDK use JNI to invoke java SDK in C++ code.</p>
+<a id="c-sdk-guide" class="anchor" href="#c-sdk-guide" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>C++ SDK Guide</h1>
+<p>CarbonData C++ SDK provides a C++ interface to write and read carbon files.
+The C++ SDK uses JNI to invoke the Java SDK in C++ code.</p>
 <h1>
-<a id="csdk-reader" class="anchor" href="#csdk-reader" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CSDK Reader</h1>
-<p>This CSDK reader reads CarbonData file and carbonindex file at a given path.
+<a id="c-sdk-reader" class="anchor" href="#c-sdk-reader" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>C++ SDK Reader</h1>
+<p>This C++ SDK reader reads CarbonData files and carbonindex files at a given path.
 External client can make use of this reader to read CarbonData files in C++
 code and without CarbonSession.</p>
 <p>In the carbon jars package, there exists a carbondata-sdk.jar,
-including SDK reader for CSDK.</p>
+including SDK reader for C++ SDK.</p>
 <h2>
 <a id="quick-example" class="anchor" href="#quick-example" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Quick example</h2>
-<pre><code>// 1. init JVM
-JavaVM *jvm;
-JNIEnv *initJVM() {
-    JNIEnv *env;
-    JavaVMInitArgs vm_args;
-    int parNum = 3;
-    int res;
-    JavaVMOption options[parNum];
-
-    options[0].optionString = "-Djava.compiler=NONE";
-    options[1].optionString = "-Djava.class.path=../../sdk/target/carbondata-sdk.jar";
-    options[2].optionString = "-verbose:jni";
-    vm_args.version = JNI_VERSION_1_8;
-    vm_args.nOptions = parNum;
-    vm_args.options = options;
-    vm_args.ignoreUnrecognized = JNI_FALSE;
-
-    res = JNI_CreateJavaVM(&amp;jvm, (void **) &amp;env, &amp;vm_args);
-    if (res &lt; 0) {
-        fprintf(stderr, "\nCan't create Java VM\n");
-        exit(1);
-    }
-
-    return env;
-}
-
-// 2. create carbon reader and read data 
-// 2.1 read data from local disk
-/**
- * test read data from local disk, without projection
- *
- * @param env  jni env
- * @return
- */
-bool readFromLocalWithoutProjection(JNIEnv *env) {
-
-    CarbonReader carbonReaderClass;
-    carbonReaderClass.builder(env, "../resources/carbondata", "test");
-    carbonReaderClass.build();
-
-    while (carbonReaderClass.hasNext()) {
-        jobjectArray row = carbonReaderClass.readNextRow();
-        jsize length = env-&gt;GetArrayLength(row);
-        int j = 0;
-        for (j = 0; j &lt; length; j++) {
-            jobject element = env-&gt;GetObjectArrayElement(row, j);
-            char *str = (char *) env-&gt;GetStringUTFChars((jstring) element, JNI_FALSE);
-            printf("%s\t", str);
-        }
-        printf("\n");
-    }
-    carbonReaderClass.close();
-}
-
-// 2.2 read data from S3
-
-/**
- * read data from S3
- * parameter is ak sk endpoint
- *
- * @param env jni env
- * @param argv argument vector
- * @return
- */
-bool readFromS3(JNIEnv *env, char *argv[]) {
-    CarbonReader reader;
-
-    char *args[3];
-    // "your access key"
-    args[0] = argv[1];
-    // "your secret key"
-    args[1] = argv[2];
-    // "your endPoint"
-    args[2] = argv[3];
-
-    reader.builder(env, "s3a://sdk/WriterOutput", "test");
-    reader.withHadoopConf(3, args);
-    reader.build();
-    printf("\nRead data from S3:\n");
-    while (reader.hasNext()) {
-        jobjectArray row = reader.readNextRow();
-        jsize length = env-&gt;GetArrayLength(row);
-
-        int j = 0;
-        for (j = 0; j &lt; length; j++) {
-            jobject element = env-&gt;GetObjectArrayElement(row, j);
-            char *str = (char *) env-&gt;GetStringUTFChars((jstring) element, JNI_FALSE);
-            printf("%s\t", str);
-        }
-        printf("\n");
-    }
-
-    reader.close();
-}
-
-// 3. destory JVM
-    (jvm)-&gt;DestroyJavaVM();
-</code></pre>
-<p>Find example code at main.cpp of CSDK module</p>
+<p>Please find example code at  <a href="https://github.com/apache/carbondata/blob/master/store/CSDK/test/main.cpp" target=_blank>main.cpp</a> of CSDK module</p>
+<p>When users use C++ to read carbon files, they should init the JVM first. Then they create
+a carbon reader and read data. There is example code for reading data from local disk
+and from S3 in main.cpp of the CSDK module. Finally, users need to
+release the memory and destroy the JVM.</p>
+<p>The C++ SDK supports reading rows in batches, as shown in the sketch below. Users can set the batch size by calling withBatch(int batch) before build(), and read a batch by calling readNextBatchRow().</p>
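+<p>A minimal sketch of batch reading from a local store, assuming the JVM has already been initialized
+as in main.cpp; the helper function name, store path, table name and batch size below are illustrative
+placeholders, not part of the API.</p>
+<pre><code>// assumes a JNIEnv *env obtained from JNI_CreateJavaVM (see initJVM in main.cpp)
+bool readBatchFromLocal(JNIEnv *env) {
+    CarbonReader reader;
+    // placeholder store path and table name
+    reader.builder(env, "../resources/carbondata", "test");
+    // fetch 32 rows per call instead of reading row by row
+    reader.withBatch(32);
+    reader.build();
+
+    while (reader.hasNext()) {
+        jobjectArray batch = reader.readNextBatchRow();
+        jsize rows = env-&gt;GetArrayLength(batch);
+        for (jsize i = 0; i &lt; rows; i++) {
+            jobject row = env-&gt;GetObjectArrayElement(batch, i);
+            // each element is one carbon row; convert or print it as needed
+        }
+    }
+    return reader.close();
+}
+</code></pre>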
 <h2>
 <a id="api-list" class="anchor" href="#api-list" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>API List</h2>
+<h3>
+<a id="carbonreader" class="anchor" href="#carbonreader" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CarbonReader</h3>
 <pre><code>    /**
      * create a CarbonReaderBuilder object for building carbonReader,
      * CarbonReaderBuilder object  can configure different parameter
@@ -342,8 +251,17 @@ bool readFromS3(JNIEnv *env, char *argv[]) {
      * @return CarbonReaderBuilder object
      */
     jobject builder(JNIEnv *env, char *path, char *tableName);
-
-    /**
+</code></pre>
+<pre><code>    /**
+     * create a CarbonReaderBuilder object for building carbonReader,
+     * CarbonReaderBuilder object  can configure different parameter
+     *
+     * @param env JNIEnv
+     * @param path data store path
+     * */
+    void builder(JNIEnv *env, char *path);
+</code></pre>
+<pre><code>    /**
      * Configure the projection column names of carbon reader
      *
      * @param argc argument counter
@@ -351,8 +269,8 @@ bool readFromS3(JNIEnv *env, char *argv[]) {
      * @return CarbonReaderBuilder object
      */
     jobject projection(int argc, char *argv[]);
-
-    /**
+</code></pre>
+<pre><code>    /**
      *  build carbon reader with argument vector
      *  it support multiple parameter
      *  like: key=value
@@ -363,36 +281,239 @@ bool readFromS3(JNIEnv *env, char *argv[]) {
      * @return CarbonReaderBuilder object
      **/
     jobject withHadoopConf(int argc, char *argv[]);
-
-    /**
+</code></pre>
+<pre><code>   /**
+     * Sets the batch size of records to read
+     *
+     * @param batch batch size
+     * @return CarbonReaderBuilder object
+     */
+    void withBatch(int batch);
+</code></pre>
+<pre><code>    /**
+     * Configure Row Record Reader for reading.
+     */
+    void withRowRecordReader();
+</code></pre>
+<pre><code>    /**
      * build carbonReader object for reading data
      * it supports reading data from local disk
      *
      * @return carbonReader object
      */
     jobject build();
-
-    /**
+</code></pre>
+<pre><code>    /**
      * Whether it has next row data
      *
      * @return boolean value, if it has next row, return true. if it hasn't next row, return false.
      */
     jboolean hasNext();
-
-    /**
-     * read next row from data
+</code></pre>
+<pre><code>    /**
+     * read next carbonRow from data
+     * @return carbonRow object of one row
+     */
+     jobject readNextRow();
+</code></pre>
+<pre><code>    /**
+     * read Next Batch Row
      *
-     * @return object array of one row
+     * @return rows
      */
-    jobjectArray readNextRow();
-
-    /**
+    jobjectArray readNextBatchRow();
+</code></pre>
+<pre><code>    /**
      * close the carbon reader
      *
      * @return  boolean value
      */
     jboolean close();
-
+</code></pre>
+<h1>
+<a id="c-sdk-writer" class="anchor" href="#c-sdk-writer" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>C++ SDK Writer</h1>
+<p>This C++ SDK writer writes CarbonData files and carbonindex files at a given path.
+External clients can make use of this writer to write CarbonData files in C++
+code without CarbonSession. The C++ SDK already supports S3 and local disk.</p>
+<p>In the carbon jars package, there exists a carbondata-sdk.jar,
+which includes the SDK writer used by the C++ SDK.</p>
+<h2>
+<a id="quick-example-1" class="anchor" href="#quick-example-1" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Quick example</h2>
+<p>Please find example code at <a href="https://github.com/apache/carbondata/blob/master/store/CSDK/test/main.cpp" target=_blank>main.cpp</a> of the CSDK module.</p>
+<p>When using C++ to write carbon files, users should initialize the JVM first, then create a
+carbon writer and write the data. main.cpp of the CSDK module contains example code for
+writing data to the local disk and to S3. Finally, users need to release the memory
+and destroy the JVM. A minimal sketch of this flow follows.</p>
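+<p>The following is a minimal sketch of that flow, not taken verbatim from main.cpp; it assumes the JVM and env were already initialized as in the reader example, and the output path, JSON schema, application name and row values are illustrative assumptions.</p>
+<pre><code>// Sketch: build a CarbonWriter, write a few rows, then close the writer.
+CarbonWriter writer;
+writer.builder(env);
+writer.outputPath("./writerOutput");                               // assumed output folder
+writer.withCsvInput("[{\"name\":\"string\"},{\"age\":\"int\"}]");  // assumed json schema
+writer.writtenBy("CSDKExample");
+writer.build();
+
+// each row is passed as a Java String array (CSV-style input)
+jclass stringClass = env-&gt;FindClass("java/lang/String");
+for (int i = 0; i &lt; 5; i++) {
+    jobjectArray row = env-&gt;NewObjectArray(2, stringClass, NULL);
+    env-&gt;SetObjectArrayElement(row, 0, env-&gt;NewStringUTF("robot"));
+    env-&gt;SetObjectArrayElement(row, 1, env-&gt;NewStringUTF("10"));
+    writer.write(row);
+    env-&gt;DeleteLocalRef(row);
+}
+writer.close();
+</code></pre>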
+<h2>
+<a id="api-list-1" class="anchor" href="#api-list-1" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>API List</h2>
+<h3>
+<a id="carbonwriter" class="anchor" href="#carbonwriter" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CarbonWriter</h3>
+<pre><code>    /**
+     * create a CarbonWriterBuilder object for building carbonWriter,
+     * CarbonWriterBuilder object  can configure different parameter
+     *
+     * @param env JNIEnv
+     * @return CarbonWriterBuilder object
+     */
+    void builder(JNIEnv *env);
+</code></pre>
+<pre><code>    /**
+     * Sets the output path of the writer builder
+     *
+     * @param path is the absolute path where output files are written
+     * This method must be called when building CarbonWriterBuilder
+     * @return updated CarbonWriterBuilder
+     */
+    void outputPath(char *path);
+</code></pre>
+<pre><code>    /**
+     * configure the schema with json style schema
+     *
+     * @param jsonSchema json style schema
+     * @return updated CarbonWriterBuilder
+     */
+    void withCsvInput(char *jsonSchema);
+</code></pre>
+<pre><code>    /**
+    * Updates the hadoop configuration with the given key value
+    *
+    * @param key key word
+    * @param value value
+    * @return CarbonWriterBuilder object
+    */
+    void withHadoopConf(char *key, char *value);
+</code></pre>
+<pre><code>    /**
+     * @param appName appName which is writing the carbondata files
+     */
+    void writtenBy(char *appName);
+</code></pre>
+<pre><code>    /**
+     * build carbonWriter object for writing data
+     * it supports writing data to local disk
+     *
+     * @return carbonWriter object
+     */
+    void build();
+</code></pre>
+<pre><code>    /**
+     * Write an object to the file, the format of the object depends on the
+     * implementation.
+     * Note: This API is not thread safe
+     */
+    void write(jobject obj);
+</code></pre>
+<pre><code>    /**
+     * close the carbon Writer
+     */
+    void close();
+</code></pre>
+<h3>
+<a id="carbonschemareader" class="anchor" href="#carbonschemareader" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CarbonSchemaReader</h3>
+<pre><code>    /**
+     * constructor with jni env
+     *
+     * @param env  jni env
+     */
+    CarbonSchemaReader(JNIEnv *env);
+</code></pre>
+<pre><code>    /**
+     * read schema from path,
+     * path can be folder path, carbonindex file path, and carbondata file path
+     * and will not check all files schema
+     *
+     * @param path file/folder path
+     * @return schema
+     */
+    jobject readSchema(char *path);
+</code></pre>
+<pre><code>    /**
+     *  read schema from path,
+     *  path can be folder path, carbonindex file path, and carbondata file path
+     *  and user can decide whether check all files schema
+     *
+     * @param path carbon data path
+     * @param validateSchema whether check all files schema
+     * @return schema
+     */
+    jobject readSchema(char *path, bool validateSchema);
+</code></pre>
+<h3>
+<a id="schema" class="anchor" href="#schema" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Schema</h3>
+<pre><code> /**
+     * constructor with jni env and carbon schema data
+     *
+     * @param env jni env
+     * @param schema  carbon schema data
+     */
+    Schema(JNIEnv *env, jobject schema);
+</code></pre>
+<pre><code>    /**
+     * get fields length of schema
+     *
+     * @return fields length
+     */
+    int getFieldsLength();
+</code></pre>
+<pre><code>    /**
+     * get field name by ordinal
+     *
+     * @param ordinal the data index of carbon schema
+     * @return ordinal field name
+     */
+    char *getFieldName(int ordinal);
+</code></pre>
+<pre><code>    /**
+     * get  field data type name by ordinal
+     *
+     * @param ordinal the data index of carbon schema
+     * @return ordinal field data type name
+     */
+    char *getFieldDataTypeName(int ordinal);
+</code></pre>
+<pre><code>    /**
+     * get  array child element data type name by ordinal
+     *
+     * @param ordinal the data index of carbon schema
+     * @return ordinal array child element data type name
+     */
+    char *getArrayElementTypeName(int ordinal);
+</code></pre>
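+<p>The following is a small sketch, not taken verbatim from main.cpp, of how CarbonSchemaReader and Schema can be combined to inspect a carbon folder; the data path is an assumption.</p>
+<pre><code>// Sketch: read the schema of a carbon folder and print field names and data types.
+CarbonSchemaReader schemaReader(env);
+jobject schemaObject = schemaReader.readSchema("../resources/carbondata");
+Schema schema(env, schemaObject);
+for (int i = 0; i &lt; schema.getFieldsLength(); i++) {
+    printf("%s\t%s\n", schema.getFieldName(i), schema.getFieldDataTypeName(i));
+}
+</code></pre>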
+<h3>
+<a id="carbonproperties" class="anchor" href="#carbonproperties" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CarbonProperties</h3>
+<pre><code>  /**
+     * Constructor of CarbonProperties
+     *
+     * @param env JNI env
+     */
+    CarbonProperties(JNIEnv *env);
+</code></pre>
+<pre><code>    /**
+     * This method will be used to add a new property
+     * 
+     * @param key property key
+     * @param value property value
+     * @return CarbonProperties object
+     */
+    jobject addProperty(char *key, char *value);
+</code></pre>
+<pre><code>    /**
+     * This method will be used to get the properties value
+     *
+     * @param key  property key
+     * @return  property value
+     */
+    char *getProperty(char *key);
+</code></pre>
+<pre><code>    /**
+     * This method will be used to get the properties value
+     * if property is not present then it will return the default value
+     *
+     * @param key  property key
+     * @param defaultValue  property default Value
+     * @return
+     */
+    char *getProperty(char *key, char *defaultValue);
 </code></pre>
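+<p>The following is a small sketch of how CarbonProperties can be used to set a carbon property before building a reader or writer; the property key and values are shown only for illustration.</p>
+<pre><code>// Sketch: add a property and read it back, falling back to a default value.
+CarbonProperties properties(env);
+properties.addProperty("carbon.unsafe.working.memory.in.mb", "1024");
+printf("%s\n", properties.getProperty("carbon.unsafe.working.memory.in.mb", "512"));
+</code></pre>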
 <script>
 $(function() {

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/bloomfilter-datamap-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/bloomfilter-datamap-guide.html b/src/main/webapp/bloomfilter-datamap-guide.html
index 19ee42a..aab8dc0 100644
--- a/src/main/webapp/bloomfilter-datamap-guide.html
+++ b/src/main/webapp/bloomfilter-datamap-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/carbon-as-spark-datasource-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/carbon-as-spark-datasource-guide.html b/src/main/webapp/carbon-as-spark-datasource-guide.html
index dd1e092..9ffca8f 100644
--- a/src/main/webapp/carbon-as-spark-datasource-guide.html
+++ b/src/main/webapp/carbon-as-spark-datasource-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>


[8/8] carbondata-site git commit: Added 1.5.1 version information

Posted by ra...@apache.org.
Added 1.5.1 version information


Project: http://git-wip-us.apache.org/repos/asf/carbondata-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata-site/commit/ae77df2e
Tree: http://git-wip-us.apache.org/repos/asf/carbondata-site/tree/ae77df2e
Diff: http://git-wip-us.apache.org/repos/asf/carbondata-site/diff/ae77df2e

Branch: refs/heads/asf-site
Commit: ae77df2e4b6a69e37daac94993f4e58e40fb7d1d
Parents: 4574ecc
Author: Raghunandan S <ca...@gmail.com>
Authored: Fri Dec 7 12:38:57 2018 +0530
Committer: Raghunandan S <ca...@gmail.com>
Committed: Fri Dec 7 17:53:57 2018 +0530

----------------------------------------------------------------------
 content/CSDK-guide.html                         | 369 +++++++++++------
 content/WEB-INF/classes/application.conf        |   2 +-
 content/bloomfilter-datamap-guide.html          |   6 +-
 content/carbon-as-spark-datasource-guide.html   |   6 +-
 content/configuration-parameters.html           | 274 ++++++-------
 content/datamap-developer-guide.html            |   6 +-
 content/datamap-management.html                 |   6 +-
 content/ddl-of-carbondata.html                  |  61 ++-
 content/dml-of-carbondata.html                  |  12 +-
 content/documentation.html                      |   8 +-
 content/faq.html                                |  18 +-
 content/file-structure-of-carbondata.html       |   9 +-
 .../how-to-contribute-to-apache-carbondata.html |   6 +-
 content/index.html                              |  15 +-
 content/introduction.html                       |   6 +-
 content/language-manual.html                    |   6 +-
 content/lucene-datamap-guide.html               |   6 +-
 content/performance-tuning.html                 |  21 +-
 content/preaggregate-datamap-guide.html         |   6 +-
 content/quick-start-guide.html                  |   8 +-
 content/release-guide.html                      |   6 +-
 content/s3-guide.html                           |   6 +-
 content/sdk-guide.html                          | 144 ++++++-
 content/security.html                           |   3 +
 content/segment-management-on-carbondata.html   |   6 +-
 content/streaming-guide.html                    |  44 +-
 content/supported-data-types-in-carbondata.html |   6 +-
 content/timeseries-datamap-guide.html           |   6 +-
 content/usecases.html                           |  18 +-
 content/videogallery.html                       |   6 +-
 src/main/resources/application.conf             |   2 +-
 src/main/scala/html/header.html                 |   6 +-
 src/main/scala/scripts/CSDK-guide               |  11 -
 src/main/scala/scripts/csdk-guide               |  11 +
 src/main/webapp/CSDK-guide.html                 | 369 +++++++++++------
 src/main/webapp/bloomfilter-datamap-guide.html  |   6 +-
 .../carbon-as-spark-datasource-guide.html       |   6 +-
 src/main/webapp/configuration-parameters.html   | 274 ++++++-------
 src/main/webapp/datamap-developer-guide.html    |   6 +-
 src/main/webapp/datamap-management.html         |   6 +-
 src/main/webapp/ddl-of-carbondata.html          |  61 ++-
 src/main/webapp/dml-of-carbondata.html          |  12 +-
 src/main/webapp/documentation.html              |   8 +-
 src/main/webapp/faq.html                        |  18 +-
 .../webapp/file-structure-of-carbondata.html    |   9 +-
 .../how-to-contribute-to-apache-carbondata.html |   6 +-
 src/main/webapp/index.html                      |   6 +-
 src/main/webapp/introduction.html               |   6 +-
 src/main/webapp/language-manual.html            |   6 +-
 src/main/webapp/lucene-datamap-guide.html       |   6 +-
 src/main/webapp/performance-tuning.html         |  21 +-
 src/main/webapp/preaggregate-datamap-guide.html |   6 +-
 src/main/webapp/quick-start-guide.html          |   8 +-
 src/main/webapp/release-guide.html              |   6 +-
 src/main/webapp/s3-guide.html                   |   6 +-
 src/main/webapp/sdk-guide.html                  | 144 ++++++-
 src/main/webapp/security.html                   |   3 +
 .../segment-management-on-carbondata.html       |   6 +-
 src/main/webapp/streaming-guide.html            |  44 +-
 .../supported-data-types-in-carbondata.html     |   6 +-
 src/main/webapp/timeseries-datamap-guide.html   |   6 +-
 src/main/webapp/usecases.html                   |  18 +-
 src/main/webapp/videogallery.html               |   6 +-
 src/site/markdown/CSDK-guide.md                 | 398 ++++++++++++++-----
 src/site/markdown/configuration-parameters.md   | 130 +++---
 src/site/markdown/ddl-of-carbondata.md          |  33 +-
 src/site/markdown/dml-of-carbondata.md          |   6 +-
 src/site/markdown/documentation.md              |   2 +-
 src/site/markdown/faq.md                        |   8 +-
 .../markdown/file-structure-of-carbondata.md    |   3 +-
 src/site/markdown/performance-tuning.md         |   7 +-
 src/site/markdown/quick-start-guide.md          |   2 +-
 src/site/markdown/sdk-guide.md                  | 165 +++++++-
 src/site/markdown/streaming-guide.md            |  40 +-
 src/site/markdown/usecases.md                   |   2 -
 75 files changed, 1946 insertions(+), 1061 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/CSDK-guide.html
----------------------------------------------------------------------
diff --git a/content/CSDK-guide.html b/content/CSDK-guide.html
index 8168aaf..73e1d67 100644
--- a/content/CSDK-guide.html
+++ b/content/CSDK-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -219,119 +219,28 @@
                                 <div class="col-sm-12  col-md-12">
                                     <div>
 <h1>
-<a id="csdk-guide" class="anchor" href="#csdk-guide" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CSDK Guide</h1>
-<p>CarbonData CSDK provides C++ interface to write and read carbon file.
-CSDK use JNI to invoke java SDK in C++ code.</p>
+<a id="c-sdk-guide" class="anchor" href="#c-sdk-guide" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>C++ SDK Guide</h1>
+<p>CarbonData C++ SDK provides a C++ interface to write and read carbon files.
+The C++ SDK uses JNI to invoke the Java SDK from C++ code.</p>
 <h1>
-<a id="csdk-reader" class="anchor" href="#csdk-reader" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CSDK Reader</h1>
-<p>This CSDK reader reads CarbonData file and carbonindex file at a given path.
+<a id="c-sdk-reader" class="anchor" href="#c-sdk-reader" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>C++ SDK Reader</h1>
+<p>This C++ SDK reader reads CarbonData file and carbonindex file at a given path.
 External client can make use of this reader to read CarbonData files in C++
 code and without CarbonSession.</p>
 <p>In the carbon jars package, there exist a carbondata-sdk.jar,
-including SDK reader for CSDK.</p>
+including SDK reader for C++ SDK.</p>
 <h2>
 <a id="quick-example" class="anchor" href="#quick-example" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Quick example</h2>
-<pre><code>// 1. init JVM
-JavaVM *jvm;
-JNIEnv *initJVM() {
-    JNIEnv *env;
-    JavaVMInitArgs vm_args;
-    int parNum = 3;
-    int res;
-    JavaVMOption options[parNum];
-
-    options[0].optionString = "-Djava.compiler=NONE";
-    options[1].optionString = "-Djava.class.path=../../sdk/target/carbondata-sdk.jar";
-    options[2].optionString = "-verbose:jni";
-    vm_args.version = JNI_VERSION_1_8;
-    vm_args.nOptions = parNum;
-    vm_args.options = options;
-    vm_args.ignoreUnrecognized = JNI_FALSE;
-
-    res = JNI_CreateJavaVM(&amp;jvm, (void **) &amp;env, &amp;vm_args);
-    if (res &lt; 0) {
-        fprintf(stderr, "\nCan't create Java VM\n");
-        exit(1);
-    }
-
-    return env;
-}
-
-// 2. create carbon reader and read data 
-// 2.1 read data from local disk
-/**
- * test read data from local disk, without projection
- *
- * @param env  jni env
- * @return
- */
-bool readFromLocalWithoutProjection(JNIEnv *env) {
-
-    CarbonReader carbonReaderClass;
-    carbonReaderClass.builder(env, "../resources/carbondata", "test");
-    carbonReaderClass.build();
-
-    while (carbonReaderClass.hasNext()) {
-        jobjectArray row = carbonReaderClass.readNextRow();
-        jsize length = env-&gt;GetArrayLength(row);
-        int j = 0;
-        for (j = 0; j &lt; length; j++) {
-            jobject element = env-&gt;GetObjectArrayElement(row, j);
-            char *str = (char *) env-&gt;GetStringUTFChars((jstring) element, JNI_FALSE);
-            printf("%s\t", str);
-        }
-        printf("\n");
-    }
-    carbonReaderClass.close();
-}
-
-// 2.2 read data from S3
-
-/**
- * read data from S3
- * parameter is ak sk endpoint
- *
- * @param env jni env
- * @param argv argument vector
- * @return
- */
-bool readFromS3(JNIEnv *env, char *argv[]) {
-    CarbonReader reader;
-
-    char *args[3];
-    // "your access key"
-    args[0] = argv[1];
-    // "your secret key"
-    args[1] = argv[2];
-    // "your endPoint"
-    args[2] = argv[3];
-
-    reader.builder(env, "s3a://sdk/WriterOutput", "test");
-    reader.withHadoopConf(3, args);
-    reader.build();
-    printf("\nRead data from S3:\n");
-    while (reader.hasNext()) {
-        jobjectArray row = reader.readNextRow();
-        jsize length = env-&gt;GetArrayLength(row);
-
-        int j = 0;
-        for (j = 0; j &lt; length; j++) {
-            jobject element = env-&gt;GetObjectArrayElement(row, j);
-            char *str = (char *) env-&gt;GetStringUTFChars((jstring) element, JNI_FALSE);
-            printf("%s\t", str);
-        }
-        printf("\n");
-    }
-
-    reader.close();
-}
-
-// 3. destory JVM
-    (jvm)-&gt;DestroyJavaVM();
-</code></pre>
-<p>Find example code at main.cpp of CSDK module</p>
+<p>Please find example code at <a href="https://github.com/apache/carbondata/blob/master/store/CSDK/test/main.cpp" target=_blank>main.cpp</a> of the CSDK module.</p>
+<p>When using C++ to read carbon files, users should initialize the JVM first, then create a
+carbon reader and read the data. main.cpp of the CSDK module contains example code for
+reading data from the local disk and from S3. Finally, users need to release the memory
+and destroy the JVM.</p>
+<p>The C++ SDK supports reading rows in batches. Users can set the batch size by calling withBatch(int batch) before build(), and read a batch by calling readNextBatchRow().</p>
 <h2>
 <a id="api-list" class="anchor" href="#api-list" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>API List</h2>
+<h3>
+<a id="carbonreader" class="anchor" href="#carbonreader" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CarbonReader</h3>
 <pre><code>    /**
      * create a CarbonReaderBuilder object for building carbonReader,
      * CarbonReaderBuilder object  can configure different parameter
@@ -342,8 +251,17 @@ bool readFromS3(JNIEnv *env, char *argv[]) {
      * @return CarbonReaderBuilder object
      */
     jobject builder(JNIEnv *env, char *path, char *tableName);
-
-    /**
+</code></pre>
+<pre><code>    /**
+     * create a CarbonReaderBuilder object for building carbonReader,
+     * CarbonReaderBuilder object  can configure different parameter
+     *
+     * @param env JNIEnv
+     * @param path data store path
+     * */
+    void builder(JNIEnv *env, char *path);
+</code></pre>
+<pre><code>    /**
      * Configure the projection column names of carbon reader
      *
      * @param argc argument counter
@@ -351,8 +269,8 @@ bool readFromS3(JNIEnv *env, char *argv[]) {
      * @return CarbonReaderBuilder object
      */
     jobject projection(int argc, char *argv[]);
-
-    /**
+</code></pre>
+<pre><code>    /**
      *  build carbon reader with argument vector
      *  it support multiple parameter
      *  like: key=value
@@ -363,36 +281,239 @@ bool readFromS3(JNIEnv *env, char *argv[]) {
      * @return CarbonReaderBuilder object
      **/
     jobject withHadoopConf(int argc, char *argv[]);
-
-    /**
+</code></pre>
+<pre><code>   /**
+     * Sets the batch size of records to read
+     *
+     * @param batch batch size
+     * @return CarbonReaderBuilder object
+     */
+    void withBatch(int batch);
+</code></pre>
+<pre><code>    /**
+     * Configure Row Record Reader for reading.
+     */
+    void withRowRecordReader();
+</code></pre>
+<pre><code>    /**
      * build carbonReader object for reading data
      * it support read data from load disk
      *
      * @return carbonReader object
      */
     jobject build();
-
-    /**
+</code></pre>
+<pre><code>    /**
      * Whether it has next row data
      *
      * @return boolean value, if it has next row, return true. if it hasn't next row, return false.
      */
     jboolean hasNext();
-
-    /**
-     * read next row from data
+</code></pre>
+<pre><code>    /**
+     * read next carbonRow from data
+     * @return carbonRow object of one row
+     */
+     jobject readNextRow();
+</code></pre>
+<pre><code>    /**
+     * read Next Batch Row
      *
-     * @return object array of one row
+     * @return rows
      */
-    jobjectArray readNextRow();
-
-    /**
+    jobjectArray readNextBatchRow();
+</code></pre>
+<pre><code>    /**
      * close the carbon reader
      *
      * @return  boolean value
      */
     jboolean close();
-
+</code></pre>
+<h1>
+<a id="c-sdk-writer" class="anchor" href="#c-sdk-writer" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>C++ SDK Writer</h1>
+<p>This C++ SDK writer writes CarbonData files and carbonindex files at a given path.
+External clients can make use of this writer to write CarbonData files in C++
+code without CarbonSession. The C++ SDK already supports S3 and local disk.</p>
+<p>In the carbon jars package, there exists a carbondata-sdk.jar,
+which includes the SDK writer used by the C++ SDK.</p>
+<h2>
+<a id="quick-example-1" class="anchor" href="#quick-example-1" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Quick example</h2>
+<p>Please find example code at <a href="https://github.com/apache/carbondata/blob/master/store/CSDK/test/main.cpp" target=_blank>main.cpp</a> of the CSDK module.</p>
+<p>When using C++ to write carbon files, users should initialize the JVM first, then create a
+carbon writer and write the data. main.cpp of the CSDK module contains example code for
+writing data to the local disk and to S3. Finally, users need to release the memory
+and destroy the JVM.</p>
+<h2>
+<a id="api-list-1" class="anchor" href="#api-list-1" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>API List</h2>
+<h3>
+<a id="carbonwriter" class="anchor" href="#carbonwriter" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CarbonWriter</h3>
+<pre><code>    /**
+     * create a CarbonWriterBuilder object for building carbonWriter,
+     * CarbonWriterBuilder object  can configure different parameter
+     *
+     * @param env JNIEnv
+     * @return CarbonWriterBuilder object
+     */
+    void builder(JNIEnv *env);
+</code></pre>
+<pre><code>    /**
+     * Sets the output path of the writer builder
+     *
+     * @param path is the absolute path where output files are written
+     * This method must be called when building CarbonWriterBuilder
+     * @return updated CarbonWriterBuilder
+     */
+    void outputPath(char *path);
+</code></pre>
+<pre><code>    /**
+     * configure the schema with json style schema
+     *
+     * @param jsonSchema json style schema
+     * @return updated CarbonWriterBuilder
+     */
+    void withCsvInput(char *jsonSchema);
+</code></pre>
+<pre><code>    /**
+    * Updates the hadoop configuration with the given key value
+    *
+    * @param key key word
+    * @param value value
+    * @return CarbonWriterBuilder object
+    */
+    void withHadoopConf(char *key, char *value);
+</code></pre>
+<pre><code>    /**
+     * @param appName appName which is writing the carbondata files
+     */
+    void writtenBy(char *appName);
+</code></pre>
+<pre><code>    /**
+     * build carbonWriter object for writing data
+     * it supports writing data to local disk
+     *
+     * @return carbonWriter object
+     */
+    void build();
+</code></pre>
+<pre><code>    /**
+     * Write an object to the file, the format of the object depends on the
+     * implementation.
+     * Note: This API is not thread safe
+     */
+    void write(jobject obj);
+</code></pre>
+<pre><code>    /**
+     * close the carbon Writer
+     */
+    void close();
+</code></pre>
+<h3>
+<a id="carbonschemareader" class="anchor" href="#carbonschemareader" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CarbonSchemaReader</h3>
+<pre><code>    /**
+     * constructor with jni env
+     *
+     * @param env  jni env
+     */
+    CarbonSchemaReader(JNIEnv *env);
+</code></pre>
+<pre><code>    /**
+     * read schema from path,
+     * path can be folder path, carbonindex file path, and carbondata file path
+     * and will not check all files schema
+     *
+     * @param path file/folder path
+     * @return schema
+     */
+    jobject readSchema(char *path);
+</code></pre>
+<pre><code>    /**
+     *  read schema from path,
+     *  path can be folder path, carbonindex file path, and carbondata file path
+     *  and user can decide whether check all files schema
+     *
+     * @param path carbon data path
+     * @param validateSchema whether check all files schema
+     * @return schema
+     */
+    jobject readSchema(char *path, bool validateSchema);
+</code></pre>
+<h3>
+<a id="schema" class="anchor" href="#schema" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Schema</h3>
+<pre><code> /**
+     * constructor with jni env and carbon schema data
+     *
+     * @param env jni env
+     * @param schema  carbon schema data
+     */
+    Schema(JNIEnv *env, jobject schema);
+</code></pre>
+<pre><code>    /**
+     * get fields length of schema
+     *
+     * @return fields length
+     */
+    int getFieldsLength();
+</code></pre>
+<pre><code>    /**
+     * get field name by ordinal
+     *
+     * @param ordinal the data index of carbon schema
+     * @return ordinal field name
+     */
+    char *getFieldName(int ordinal);
+</code></pre>
+<pre><code>    /**
+     * get  field data type name by ordinal
+     *
+     * @param ordinal the data index of carbon schema
+     * @return ordinal field data type name
+     */
+    char *getFieldDataTypeName(int ordinal);
+</code></pre>
+<pre><code>    /**
+     * get  array child element data type name by ordinal
+     *
+     * @param ordinal the data index of carbon schema
+     * @return ordinal array child element data type name
+     */
+    char *getArrayElementTypeName(int ordinal);
+</code></pre>
+<h3>
+<a id="carbonproperties" class="anchor" href="#carbonproperties" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CarbonProperties</h3>
+<pre><code>  /**
+     * Constructor of CarbonProperties
+     *
+     * @param env JNI env
+     */
+    CarbonProperties(JNIEnv *env);
+</code></pre>
+<pre><code>    /**
+     * This method will be used to add a new property
+     * 
+     * @param key property key
+     * @param value property value
+     * @return CarbonProperties object
+     */
+    jobject addProperty(char *key, char *value);
+</code></pre>
+<pre><code>    /**
+     * This method will be used to get the properties value
+     *
+     * @param key  property key
+     * @return  property value
+     */
+    char *getProperty(char *key);
+</code></pre>
+<pre><code>    /**
+     * This method will be used to get the properties value
+     * if property is not present then it will return the default value
+     *
+     * @param key  property key
+     * @param defaultValue  property default Value
+     * @return
+     */
+    char *getProperty(char *key, char *defaultValue);
 </code></pre>
 <script>
 $(function() {

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/WEB-INF/classes/application.conf
----------------------------------------------------------------------
diff --git a/content/WEB-INF/classes/application.conf b/content/WEB-INF/classes/application.conf
index 020b127..2f1b695 100644
--- a/content/WEB-INF/classes/application.conf
+++ b/content/WEB-INF/classes/application.conf
@@ -17,7 +17,7 @@ fileList=["configuration-parameters",
   "how-to-contribute-to-apache-carbondata",
   "introduction",
   "usecases",
-  "CSDK-guide",
+  "csdk-guide",
   "carbon-as-spark-datasource-guide"
   ]
 dataMapFileList=[

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/bloomfilter-datamap-guide.html
----------------------------------------------------------------------
diff --git a/content/bloomfilter-datamap-guide.html b/content/bloomfilter-datamap-guide.html
index 19ee42a..aab8dc0 100644
--- a/content/bloomfilter-datamap-guide.html
+++ b/content/bloomfilter-datamap-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/carbon-as-spark-datasource-guide.html
----------------------------------------------------------------------
diff --git a/content/carbon-as-spark-datasource-guide.html b/content/carbon-as-spark-datasource-guide.html
index dd1e092..9ffca8f 100644
--- a/content/carbon-as-spark-datasource-guide.html
+++ b/content/carbon-as-spark-datasource-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>


[4/8] carbondata-site git commit: Added 1.5.1 version information

Posted by ra...@apache.org.
http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/configuration-parameters.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/configuration-parameters.html b/src/main/webapp/configuration-parameters.html
index 5c334eb..5cc7a45 100644
--- a/src/main/webapp/configuration-parameters.html
+++ b/src/main/webapp/configuration-parameters.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -220,7 +220,7 @@
                                     <div>
 <h1>
 <a id="configuring-carbondata" class="anchor" href="#configuring-carbondata" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Configuring CarbonData</h1>
-<p>This guide explains the configurations that can be used to tune CarbonData to achieve better performance.Most of the properties that control the internal settings have reasonable default values. They are listed along with the properties along with explanation.</p>
+<p>This guide explains the configurations that can be used to tune CarbonData to achieve better performance. Most of the properties that control the internal settings have reasonable default values. They are listed below along with their default values and an explanation of each.</p>
 <ul>
 <li><a href="#system-configuration">System Configuration</a></li>
 <li><a href="#data-loading-configuration">Data Loading Configuration</a></li>
@@ -244,7 +244,7 @@
 <tr>
 <td>carbon.storelocation</td>
 <td>spark.sql.warehouse.dir property value</td>
-<td>Location where CarbonData will create the store, and write the data in its custom format. If not specified,the path defaults to spark.sql.warehouse.dir property. <strong>NOTE:</strong> Store location should be in HDFS.</td>
+<td>Location where CarbonData will create the store, and write the data in its custom format. If not specified, the path defaults to spark.sql.warehouse.dir property. <strong>NOTE:</strong> Store location should be in HDFS or S3.</td>
 </tr>
 <tr>
 <td>carbon.ddl.base.hdfs.url</td>
@@ -269,17 +269,17 @@
 <tr>
 <td>carbon.query.show.datamaps</td>
 <td>true</td>
-<td>CarbonData stores datamaps as independent tables so as to allow independent maintenance to some extent. When this property is true,which is by default, show tables command will list all the tables including datatmaps(eg: Preaggregate table), else datamaps will be excluded from the table list.<strong>NOTE:</strong>  It is generally not required for the user to do any maintenance operations on these tables and hence not required to be seen.But it is shown by default so that user or admin can get clear understanding of the system for capacity planning.</td>
+<td>CarbonData stores datamaps as independent tables so as to allow independent maintenance to some extent. When this property is true, which is the default, the show tables command will list all the tables including datamaps (eg: Preaggregate table), else datamaps will be excluded from the table list. <strong>NOTE:</strong> It is generally not required for the user to do any maintenance operations on these tables and hence not required to be seen. But it is shown by default so that the user or admin can get a clear understanding of the system for capacity planning.</td>
 </tr>
 <tr>
 <td>carbon.segment.lock.files.preserve.hours</td>
 <td>48</td>
-<td>In order to support parallel data loading onto the same table, CarbonData sequences(locks) at the granularity of segments.Operations affecting the segment(like IUD, alter) are blocked from parallel operations. This property value indicates the number of hours the segment lock files will be preserved after dataload. These lock files will be deleted with the clean command after the configured number of hours.</td>
+<td>In order to support parallel data loading onto the same table, CarbonData sequences(locks) at the granularity of segments. Operations affecting the segment(like IUD, alter) are blocked from parallel operations. This property value indicates the number of hours the segment lock files will be preserved after dataload. These lock files will be deleted with the clean command after the configured number of hours.</td>
 </tr>
 <tr>
 <td>carbon.timestamp.format</td>
 <td>yyyy-MM-dd HH:mm:ss</td>
-<td>CarbonData can understand data of timestamp type and process it in special manner.It can be so that the format of Timestamp data is different from that understood by CarbonData by default. This configuration allows users to specify the format of Timestamp in their data.</td>
+<td>CarbonData can understand data of timestamp type and process it in special manner. It can be so that the format of Timestamp data is different from that understood by CarbonData by default. This configuration allows users to specify the format of Timestamp in their data.</td>
 </tr>
 <tr>
 <td>carbon.lock.type</td>
@@ -292,14 +292,19 @@
 <td>This configuration specifies the path where lock files have to be created. Recommended to configure zookeeper lock type or configure HDFS lock path(to this property) in case of S3 file system as locking is not feasible on S3.</td>
 </tr>
 <tr>
+<td>enable.offheap.sort</td>
+<td>true</td>
+<td>Whether carbondata will use offheap or onheap memory. By default, the value is true and carbondata will use the property value from <em>carbon.unsafe.working.memory.in.mb</em> or <em>carbon.unsafe.driver.working.memory.in.mb</em> as the amount of memory; if it is false, carbondata will use the minimum value between the configured amount of unsafe memory and the 60% of JVM Heap Memory as the amount of memory.</td>
+</tr>
+<tr>
 <td>carbon.unsafe.working.memory.in.mb</td>
 <td>512</td>
-<td>CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. The Minimum value recommeded is 512MB. Any value below this is reset to default value of 512MB. <strong>NOTE:</strong> The below formulas explain how to arrive at the off-heap size required.Memory Required For Data Loading:(<em>carbon.number.of.cores.while.loading</em>) * (Number of tables to load in parallel) * (<em>offheap.sort.chunk.size.inmb</em> + <em>carbon.blockletgroup.size.in.mb</em> + <em>carbon.blockletgroup.size.in.mb</em>/3.5 ). Memory required for Query:SPARK_EXECUTOR_INSTANCES * (<em>carbon.blockletgroup.size.in.mb</em> + <em>carbon.blockletgroup.size.in.mb</em> * 3.5) * spark.executor.cores</td>
+<td>CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. The minimum value recommended is 512MB. Any value below this is reset to the default value of 512MB. <strong>NOTE:</strong> The below formulas explain how to arrive at the off-heap size required. Memory required for data loading per executor: (<em>carbon.number.of.cores.while.loading</em>) * (Number of tables to load in parallel) * (<em>offheap.sort.chunk.size.inmb</em> + <em>carbon.blockletgroup.size.in.mb</em> + <em>carbon.blockletgroup.size.in.mb</em>/3.5 ). Memory required for query per executor: (<em>carbon.blockletgroup.size.in.mb</em> + <em>carbon.blockletgroup.size.in.mb</em> * 3.5) * spark.executor.cores</td>
 </tr>
 <tr>
 <td>carbon.unsafe.driver.working.memory.in.mb</td>
-<td>60% of JVM Heap Memory</td>
-<td>CarbonData supports storing data in unsafe on-heap memory in driver for certain operations like insert into, query for loading datamap cache. The Minimum value recommended is 512MB.</td>
+<td>(none)</td>
+<td>CarbonData supports storing data in unsafe on-heap memory in driver for certain operations like insert into, query for loading datamap cache. The Minimum value recommended is 512MB. If this configuration is not set, carbondata will use the value of <code>carbon.unsafe.working.memory.in.mb</code>.</td>
 </tr>
 <tr>
 <td>carbon.update.sync.folder</td>
@@ -309,12 +314,12 @@
 <tr>
 <td>carbon.invisible.segments.preserve.count</td>
 <td>200</td>
-<td>CarbonData maintains each data load entry in tablestatus file. The entries from this file are not deleted for those segments that are compacted or dropped, but are made invisible. If the number of data loads are very high, the size and number of entries in tablestatus file can become too many causing unnecessary reading of all data. This configuration specifies the number of segment entries to be maintained afte they are compacted or dropped.Beyond this, the entries are moved to a separate history tablestatus file. <strong>NOTE:</strong> The entries in tablestatus file help to identify the operations performed on CarbonData table and is also used for checkpointing during various data manupulation operations. This is similar to AUDIT file maintaining all the operations and its status.Hence the entries are never deleted but moved to a separate history file.</td>
+<td>CarbonData maintains each data load entry in tablestatus file. The entries from this file are not deleted for those segments that are compacted or dropped, but are made invisible. If the number of data loads are very high, the size and number of entries in tablestatus file can become too many causing unnecessary reading of all data. This configuration specifies the number of segment entries to be maintained after they are compacted or dropped. Beyond this, the entries are moved to a separate history tablestatus file. <strong>NOTE:</strong> The entries in tablestatus file help to identify the operations performed on CarbonData table and are also used for checkpointing during various data manipulation operations. This is similar to AUDIT file maintaining all the operations and its status. Hence the entries are never deleted but moved to a separate history file.</td>
 </tr>
 <tr>
 <td>carbon.lock.retries</td>
 <td>3</td>
-<td>CarbonData ensures consistency of operations by blocking certain operations from running in parallel. In order to block the operations from running in parallel, lock is obtained on the table. This configuration specifies the maximum number of retries to obtain the lock for any operations other than load. <strong>NOTE:</strong> Data manupulation operations like Compaction,UPDATE,DELETE  or LOADING,UPDATE,DELETE are not allowed to run in parallel.How ever data loading can happen in parallel to compaction.</td>
+<td>CarbonData ensures consistency of operations by blocking certain operations from running in parallel. In order to block the operations from running in parallel, a lock is obtained on the table. This configuration specifies the maximum number of retries to obtain the lock for any operations other than load. <strong>NOTE:</strong> Data manipulation operations like Compaction, UPDATE, DELETE or LOADING, UPDATE, DELETE are not allowed to run in parallel. However data loading can happen in parallel to compaction.</td>
 </tr>
 <tr>
 <td>carbon.lock.retry.timeout.sec</td>
@@ -335,14 +340,55 @@
 </thead>
 <tbody>
 <tr>
+<td>carbon.concurrent.lock.retries</td>
+<td>100</td>
+<td>CarbonData supports concurrent data loading onto the same table. To ensure the loading status is correctly updated into the system, locks are used to sequence the status update step. This configuration specifies the maximum number of retries to obtain the lock for updating the load status. <strong>NOTE:</strong> The more concurrent loads that happen, the higher the chance of failing to obtain the lock on a given attempt, so this default is kept high. Adjust this value according to the number of concurrent loads to be supported by the system.</td>
+</tr>
+<tr>
+<td>carbon.concurrent.lock.retry.timeout.sec</td>
+<td>1</td>
+<td>Specifies the interval between the retries to obtain the lock for concurrent operations. <strong>NOTE:</strong> Refer to <em><strong>carbon.concurrent.lock.retries</strong></em> for understanding why CarbonData uses locks during data loading operations.</td>
+</tr>
+<tr>
+<td>carbon.csv.read.buffersize.byte</td>
+<td>1048576</td>
+<td>CarbonData uses Hadoop InputFormat to read the csv files. This configuration value is used to pass buffer size as input for the Hadoop MR job when reading the csv files. This value is configured in bytes. <strong>NOTE:</strong> Refer to <em><strong>org.apache.hadoop.mapreduce.InputFormat</strong></em> documentation for additional information.</td>
+</tr>
+<tr>
+<td>carbon.loading.prefetch</td>
+<td>false</td>
+<td>CarbonData uses univocity parser to read csv files. This configuration is used to inform the parser whether it can prefetch the data from csv files to speed up the reading.<strong>NOTE:</strong> Enabling prefetch improves the data loading performance, but needs higher memory to keep more records which are read ahead from disk.</td>
+</tr>
+<tr>
+<td>carbon.skip.empty.line</td>
+<td>false</td>
+<td>The csv files given to CarbonData for loading can contain empty lines. Based on the business scenario, these empty lines might have to be ignored or treated as NULL values for all columns. This configuration is provided to define that behavior. <strong>NOTE:</strong> In order to consider NULL values for non string columns and continue with the data load, <em><strong>carbon.bad.records.action</strong></em> needs to be set to <strong>FORCE</strong>; else the data load will fail as bad records are encountered.</td>
+</tr>
+<tr>
 <td>carbon.number.of.cores.while.loading</td>
 <td>2</td>
 <td>Number of cores to be used while loading data. This also determines the number of threads to be used to read the input files (csv) in parallel.<strong>NOTE:</strong> This configured value is used in every data loading step to parallelize the operations. Configuring a higher value can lead to increased early thread pre-emption by OS and there by reduce the overall performance.</td>
 </tr>
 <tr>
-<td>carbon.sort.size</td>
-<td>100000</td>
-<td>Number of records to hold in memory to sort and write intermediate temp files.<strong>NOTE:</strong> Memory required for data loading increases with increase in configured value as each thread would cache configured number of records.</td>
+<td>enable.unsafe.sort</td>
+<td>true</td>
+<td>CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. This configuration enables the use of unsafe functions in CarbonData. <strong>NOTE:</strong> For operations like data loading, which generate more short-lived Java objects, Java GC can be a bottleneck. Using unsafe can overcome the GC overhead and improve the overall performance.</td>
+</tr>
+<tr>
+<td>enable.offheap.sort</td>
+<td>true</td>
+<td>CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. This configuration enables using off-heap memory for sorting of data during data loading.<strong>NOTE:</strong>  <em><strong>enable.unsafe.sort</strong></em> configuration needs to be configured to true for using off-heap</td>
+</tr>
+<tr>
+<td>carbon.load.sort.scope</td>
+<td>LOCAL_SORT</td>
+<td>CarbonData can support various sorting options to match the balance between load and query performance. LOCAL_SORT: All the data given to an executor in the single load is fully sorted and written to carbondata files. Data loading performance is reduced a little as the entire data needs to be sorted in the executor. BATCH_SORT: Sorts the data in batches of configured size and writes to carbondata files. Data loading performance increases as the entire data need not be sorted. But query performance will get reduced due to false positives in block pruning and also due to more number of carbondata files written. Due to more number of carbondata files, if identified blocks &gt; cluster parallelism, query performance and concurrency will get reduced. GLOBAL_SORT: Entire data in the data load is fully sorted and written to carbondata files. Data loading performance would get reduced as the entire data needs to be sorted. But the query performance increases significantly due to far fewer false positives and concurrency is also improved. <strong>NOTE:</strong> when BATCH_SORT is configured, it is recommended to keep <em><strong>carbon.load.batch.sort.size.inmb</strong></em> &gt; <em><strong>carbon.blockletgroup.size.in.mb</strong></em>
+</td>
+</tr>
+<tr>
+<td>carbon.load.batch.sort.size.inmb</td>
+<td>0</td>
+<td>When  <em><strong>carbon.load.sort.scope</strong></em> is configured as <em><strong>BATCH_SORT</strong></em>, this configuration needs to be added to specify the batch size for sorting and writing to carbondata files. <strong>NOTE:</strong> It is recommended to keep the value around 45% of <em><strong>carbon.sort.storage.inmemory.size.inmb</strong></em> to avoid spill to disk. Also it is recommended to keep the value higher than <em><strong>carbon.blockletgroup.size.in.mb</strong></em>. Refer to <em>carbon.load.sort.scope</em> for more information on sort options and the advantages/disadvantages of each option.</td>
 </tr>
 <tr>
 <td>carbon.global.sort.rdd.storage.level</td>
@@ -352,12 +398,17 @@
 <tr>
 <td>carbon.load.global.sort.partitions</td>
 <td>0</td>
-<td>The Number of partitions to use when shuffling data for sort. Default value 0 means to use same number of map tasks as reduce tasks.<strong>NOTE:</strong> In general, it is recommended to have 2-3 tasks per CPU core in your cluster.</td>
+<td>The number of partitions to use when shuffling data for global sort. Default value 0 means to use same number of map tasks as reduce tasks. <strong>NOTE:</strong> In general, it is recommended to have 2-3 tasks per CPU core in your cluster.</td>
+</tr>
+<tr>
+<td>carbon.sort.size</td>
+<td>100000</td>
+<td>Number of records to hold in memory to sort and write intermediate sort temp files. <strong>NOTE:</strong> Memory required for data loading will increase if you set this value higher, and each thread will cache this amount of records. The number of threads is configured by <em>carbon.number.of.cores.while.loading</em>.</td>
 </tr>
 <tr>
 <td>carbon.options.bad.records.logger.enable</td>
 <td>false</td>
-<td>CarbonData can identify the records that are not conformant to schema and isolate them as bad records. Enabling this configuration will make CarbonData to log such bad records.<strong>NOTE:</strong> If the input data contains many bad records, logging them will slow down the over all data loading throughput. The data load operation status would depend on the configuration in <em><strong>carbon.bad.records.action</strong></em>.</td>
+<td>CarbonData can identify the records that are not conformant to schema and isolate them as bad records. Enabling this configuration will make CarbonData log such bad records. <strong>NOTE:</strong> If the input data contains many bad records, logging them will slow down the overall data loading throughput. The data load operation status would depend on the configuration in <em><strong>carbon.bad.records.action</strong></em>.</td>
 </tr>
 <tr>
 <td>carbon.bad.records.action</td>
@@ -372,58 +423,63 @@
 <tr>
 <td>carbon.options.bad.record.path</td>
 <td>(none)</td>
-<td>Specifies the HDFS path where bad records are to be stored. By default the value is Null. This path must to be configured by the user if <em><strong>carbon.options.bad.records.logger.enable</strong></em> is <strong>true</strong> or <em><strong>carbon.bad.records.action</strong></em> is <strong>REDIRECT</strong>.</td>
+<td>Specifies the HDFS path where bad records are to be stored. By default the value is Null. This path must be configured by the user if <em><strong>carbon.options.bad.records.logger.enable</strong></em> is <strong>true</strong> or <em><strong>carbon.bad.records.action</strong></em> is <strong>REDIRECT</strong>.</td>
 </tr>
 <tr>
 <td>carbon.blockletgroup.size.in.mb</td>
 <td>64</td>
-<td>Please refer to <a href="./file-structure-of-carbondata.html#carbondata-file-format">file-structure-of-carbondata</a> to understand the storage format of CarbonData. The data are read as a group of blocklets which are called blocklet groups. This parameter specifies the size of each blocklet group. Higher value results in better sequential IO access. The minimum value is 16MB, any value lesser than 16MB will reset to the default value (64MB).<strong>NOTE:</strong> Configuring a higher value might lead to poor performance as an entire blocklet group will have to read into memory before processing.For filter queries with limit, it is <strong>not advisable</strong> to have a bigger blocklet size. For Aggregation queries which need to return more number of rows,bigger blocklet size is advisable.</td>
+<td>Please refer to <a href="./file-structure-of-carbondata.html#carbondata-file-format">file-structure-of-carbondata</a> to understand the storage format of CarbonData. The data are read as a group of blocklets which are called blocklet groups. This parameter specifies the size of each blocklet group. A higher value results in better sequential IO access. The minimum value is 16MB; any value lesser than 16MB will be reset to the default value (64MB). <strong>NOTE:</strong> Configuring a higher value might lead to poor performance as an entire blocklet group will have to be read into memory before processing. For filter queries with limit, it is <strong>not advisable</strong> to have a bigger blocklet size. For aggregation queries which need to return more number of rows, a bigger blocklet size is advisable.</td>
 </tr>
 <tr>
 <td>carbon.sort.file.write.buffer.size</td>
 <td>16384</td>
-<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. This configuration determines the buffer size to be used for reading and writing such files. <strong>NOTE:</strong> This configuration is useful to tune IO and derive optimal performance.Based on the OS and underlying harddisk type, these values can significantly affect the overall performance.It is ideal to tune the buffersize equivalent to the IO buffer size of the OS.Recommended range is between 10240 to 10485760 bytes.</td>
+<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. This configuration determines the buffer size to be used for reading and writing such files. <strong>NOTE:</strong> This configuration is useful to tune IO and derive optimal performance. Based on the OS and underlying harddisk type, these values can significantly affect the overall performance. It is ideal to tune the buffer size equivalent to the IO buffer size of the OS. Recommended range is between 10240 and 10485760 bytes.</td>
 </tr>
 <tr>
 <td>carbon.sort.intermediate.files.limit</td>
 <td>20</td>
-<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. Before writing the target carbondat file, the data in these intermediate files needs to be sorted again so as to ensure the entire data in the data load is sorted. This configuration determines the minimum number of intermediate files after which merged sort is applied on them sort the data.<strong>NOTE:</strong> Intermediate merging happens on a separate thread in the background.Number of threads used is determined by <em><strong>carbon.merge.sort.reader.thread</strong></em>.Configuring a low value will cause more time to be spent in merging these intermediate merged files which can cause more IO.Configuring a high value would cause not to use the idle threads to do intermediate sort merges.Range of recommended values are between 2 and 50</td>
-</tr>
-<tr>
-<td>carbon.csv.read.buffersize.byte</td>
-<td>1048576</td>
-<td>CarbonData uses Hadoop InputFormat to read the csv files. This configuration value is used to pass buffer size as input for the Hadoop MR job when reading the csv files. This value is configured in bytes.<strong>NOTE:</strong> Refer to <em><strong>org.apache.hadoop.mapreduce.InputFormat</strong></em> documentation for additional information.</td>
+<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. Before writing the target carbondata file, the records in these intermediate files need to be merged to reduce the number of intermediate files. This configuration determines the minimum number of intermediate files after which merge sort is applied on them to sort the data. <strong>NOTE:</strong> Intermediate merging happens on a separate thread in the background. The number of threads used is determined by <em><strong>carbon.merge.sort.reader.thread</strong></em>. Configuring a low value will cause more time to be spent in merging these intermediate merged files which can cause more IO. Configuring a high value would not use the idle threads to do intermediate sort merges. Recommended range is between 2 and 50.</td>
 </tr>
 <tr>
 <td>carbon.merge.sort.reader.thread</td>
 <td>3</td>
-<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. When the intermediate files reaches <em><strong>carbon.sort.intermediate.files.limit</strong></em> the files will be merged,the number of threads specified in this configuration will be used to read the intermediate files for performing merge sort.<strong>NOTE:</strong> Refer to <em><strong>carbon.sort.intermediate.files.limit</strong></em> for operation description.Configuring less  number of threads can cause merging to slow down over loading process where as configuring more number of threads can cause thread contention with threads in other data loading steps.Hence configure a fraction of <em><strong>carbon.number.of.cores.while.loading</strong></em>.</td>
+<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. When the intermediate files reach <em><strong>carbon.sort.intermediate.files.limit</strong></em>, the files will be merged in another thread pool. This value controls the size of that pool. Each thread will read the intermediate files, do a merge sort and finally write the records to another file. <strong>NOTE:</strong> Refer to <em><strong>carbon.sort.intermediate.files.limit</strong></em> for operation description. Configuring a smaller number of threads can cause merging to slow down the overall loading process, whereas configuring a larger number of threads can cause thread contention with threads in other data loading steps. Hence configure a fraction of <em><strong>carbon.number.of.cores.while.loading</strong></em>.</td>
 </tr>
 <tr>
-<td>carbon.concurrent.lock.retries</td>
-<td>100</td>
-<td>CarbonData supports concurrent data loading onto same table. To ensure the loading status is correctly updated into the system,locks are used to sequence the status updation step. This configuration specifies the maximum number of retries to obtain the lock for updating the load status. <strong>NOTE:</strong> This value is high as more number of concurrent loading happens,more the chances of not able to obtain the lock when tried. Adjust this value according to the number of concurrent loading to be supported by the system.</td>
+<td>carbon.merge.sort.prefetch</td>
+<td>true</td>
+<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure the memory footprint is within limits. These intermediate temp files have to be sorted using merge sort before writing into CarbonData format. This configuration enables prefetching of data from these temp files in order to optimize IO and speed up the data loading process.</td>
 </tr>
 <tr>
-<td>carbon.concurrent.lock.retry.timeout.sec</td>
-<td>1</td>
-<td>Specifies the interval between the retries to obtain the lock for concurrent operations. <strong>NOTE:</strong> Refer to <em><strong>carbon.concurrent.lock.retries</strong></em> for understanding why CarbonData uses locks during data loading operations.</td>
+<td>carbon.prefetch.buffersize</td>
+<td>1000</td>
+<td>When the configuration <em><strong>carbon.merge.sort.prefetch</strong></em> is set to true, we need to set the number of records that can be prefetched. This configuration specifies the number of records to be prefetched. <strong>NOTE:</strong> Configuring a larger number of records to be prefetched increases the memory footprint as more records have to be kept in memory.</td>
 </tr>
 <tr>
-<td>carbon.skip.empty.line</td>
+<td>enable.inmemory.merge.sort</td>
 <td>false</td>
-<td>The csv files givent to CarbonData for loading can contain empty lines. Based on the business scenario, this empty line might have to be ignored or needs to be treated as NULL value for all columns.In order to define this business behavior, this configuration is provided.<strong>NOTE:</strong> In order to consider NULL values for non string columns and continue with data load, <em><strong>carbon.bad.records.action</strong></em> need to be set to <strong>FORCE</strong>;else data load will be failed as bad records encountered.</td>
+<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. These intermediate files need to be sorted again using merge sort before writing to the final carbondata file. Performing merge sort in memory increases the sorting performance at the cost of a larger memory footprint. This configuration specifies whether to do in-memory merge sort or file based merge sort.</td>
+</tr>
+<tr>
+<td>carbon.sort.storage.inmemory.size.inmb</td>
+<td>512</td>
+<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure memory footprint is within limits. When <em><strong>enable.unsafe.sort</strong></em> configuration is enabled, instead of using <em><strong>carbon.sort.size</strong></em> which is based on rows count, size occupied in memory is used to determine when to flush data pages to intermediate temp files. This configuration determines the memory to be used for storing data pages in memory. <strong>NOTE:</strong> Configuring a higher value ensures more data is maintained in memory and hence increases data loading performance due to reduced or no IO. Based on the memory availability in the nodes of the cluster, configure the values accordingly.</td>
+</tr>
+<tr>
+<td>carbon.load.sortmemory.spill.percentage</td>
+<td>0</td>
+<td>During data loading, some data pages are kept in memory up to the memory configured in <em><strong>carbon.sort.storage.inmemory.size.inmb</strong></em> beyond which they are spilled to disk as intermediate temporary sort files. This configuration determines after what percentage data needs to be spilled to disk. <strong>NOTE:</strong> Without this configuration, when the data pages occupy up to the configured memory, new data pages would be dumped to disk while old pages are still maintained in memory.</td>
 </tr>
 <tr>
 <td>carbon.enable.calculate.size</td>
 <td>true</td>
 <td>
-<strong>For Load Operation</strong>: Setting this property calculates the size of the carbon data file (.carbondata) and carbon index file (.carbonindex) for every load and updates the table status file. <strong>For Describe Formatted</strong>: Setting this property calculates the total size of the carbon data files and carbon index files for the respective table and displays in describe formatted command. <strong>NOTE:</strong> This is useful to determine the overall size of the carbondata table and also get an idea of how the table is growing in order to take up other backup strategy decisions.</td>
+<strong>For Load Operation</strong>: Enabling this property will let carbondata calculate the size of the carbon data file (.carbondata) and the carbon index file (.carbonindex) for each load and update the table status file. <strong>For Describe Formatted</strong>: Enabling this property will let carbondata calculate the total size of the carbon data files and the carbon index files for each table and display it in the describe formatted command. <strong>NOTE:</strong> This is useful to determine the overall size of the carbondata table and also get an idea of how the table is growing in order to take other backup strategy decisions.</td>
 </tr>
 <tr>
 <td>carbon.cutOffTimestamp</td>
 <td>(none)</td>
-<td>CarbonData has capability to generate the Dictionary values for the timestamp columns from the data itself without the need to store the computed dictionary values. This configuration sets the start date for calculating the timestamp. Java counts the number of milliseconds from start of "1970-01-01 00:00:00". This property is used to customize the start of position. For example "2000-01-01 00:00:00". <strong>NOTE:</strong> The date must be in the form <em><strong>carbon.timestamp.format</strong></em>. CarbonData supports storing data for upto 68 years.For example, if the cut-off time is 1970-01-01 05:30:00, then data upto 2038-01-01 05:30:00 will be supported by CarbonData.</td>
+<td>CarbonData has the capability to generate the dictionary values for the timestamp columns from the data itself without the need to store the computed dictionary values. This configuration sets the start date for calculating the timestamp. Java counts the number of milliseconds from the start of "1970-01-01 00:00:00". This property is used to customize the start position, for example "2000-01-01 00:00:00". <strong>NOTE:</strong> The date must be in the form <em><strong>carbon.timestamp.format</strong></em>. CarbonData supports storing data for up to 68 years. For example, if the cut-off time is 1970-01-01 05:30:00, then data up to 2038-01-01 05:30:00 will be supported by CarbonData.</td>
 </tr>
 <tr>
 <td>carbon.timegranularity</td>
@@ -432,99 +488,38 @@
 </tr>
 <tr>
 <td>carbon.use.local.dir</td>
-<td>false</td>
+<td>true</td>
 <td>CarbonData, during data loading, writes files to local temp directories before copying the files to HDFS. This configuration is used to specify whether CarbonData can write locally to the tmp directory of the container or to the YARN application directory.</td>
 </tr>
 <tr>
-<td>carbon.use.multiple.temp.dir</td>
-<td>false</td>
-<td>When multiple disks are present in the system, YARN is generally configured with multiple disks to be used as temp directories for managing the containers. This configuration specifies whether to use multiple YARN local directories during data loading for disk IO load balancing.Enable <em><strong>carbon.use.local.dir</strong></em> for this configuration to take effect. <strong>NOTE:</strong> Data Loading is an IO intensive operation whose performance can be limited by the disk IO threshold, particularly during multi table concurrent data load.Configuring this parameter, balances the disk IO across multiple disks there by improving the over all load performance.</td>
-</tr>
-<tr>
 <td>carbon.sort.temp.compressor</td>
-<td>(none)</td>
-<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure memory footprint is within limits. These temporary files can be compressed and written in order to save the storage space. This configuration specifies the name of compressor to be used to compress the intermediate sort temp files during sort procedure in data loading. The valid values are 'SNAPPY','GZIP','BZIP2','LZ4','ZSTD' and empty. By default, empty means that Carbondata will not compress the sort temp files. <strong>NOTE:</strong> Compressor will be useful if you encounter disk bottleneck.Since the data needs to be compressed and decompressed,it involves additional CPU cycles,but is compensated by the high IO throughput due to less data to be written or read from the disks.</td>
+<td>SNAPPY</td>
+<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure the memory footprint is within limits. These temporary files can be compressed and written in order to save the storage space. This configuration specifies the name of the compressor to be used to compress the intermediate sort temp files during the sort procedure in data loading. The valid values are 'SNAPPY', 'GZIP', 'BZIP2', 'LZ4', 'ZSTD' and empty. Specifying an empty value means that Carbondata will not compress the sort temp files. <strong>NOTE:</strong> A compressor will be useful if you encounter a disk bottleneck. Since the data needs to be compressed and decompressed, it involves additional CPU cycles, but this is compensated by the high IO throughput due to less data being written to or read from the disks.</td>
 </tr>
 <tr>
 <td>carbon.load.skewedDataOptimization.enabled</td>
 <td>false</td>
-<td>During data loading,CarbonData would divide the number of blocks equally so as to ensure all executors process same number of blocks. This mechanism satisfies most of the scenarios and ensures maximum parallel processing for optimal data loading performance.In some business scenarios, there might be scenarios where the size of blocks vary significantly and hence some executors would have to do more work if they get blocks containing more data. This configuration enables size based block allocation strategy for data loading. When loading, carbondata will use file size based block allocation strategy for task distribution. It will make sure that all the executors process the same size of data.<strong>NOTE:</strong> This configuration is useful if the size of your input data files varies widely, say 1MB to 1GB.For this configuration to work effectively,knowing the data pattern and size is important and necessary.</td>
-</tr>
-<tr>
-<td>carbon.load.min.size.enabled</td>
-<td>false</td>
-<td>During Data Loading, CarbonData would divide the number of files among the available executors to parallelize the loading operation. When the input data files are very small, this action causes to generate many small carbondata files. This configuration determines whether to enable node minumun input data size allocation strategy for data loading.It will make sure that the node load the minimum amount of data there by reducing number of carbondata files.<strong>NOTE:</strong> This configuration is useful if the size of the input data files are very small, like 1MB to 256MB.Refer to <em><strong>load_min_size_inmb</strong></em> to configure the minimum size to be considered for splitting files among executors.</td>
+<td>During data loading, CarbonData would divide the number of blocks equally so as to ensure all executors process the same number of blocks. This mechanism satisfies most of the scenarios and ensures maximum parallel processing for optimal data loading performance. In some business scenarios, the size of blocks may vary significantly and hence some executors would have to do more work if they get blocks containing more data. This configuration enables a size based block allocation strategy for data loading. When loading, carbondata will use a file size based block allocation strategy for task distribution. It will make sure that all the executors process the same size of data. <strong>NOTE:</strong> This configuration is useful if the size of your input data files varies widely, say 1MB to 1GB. For this configuration to work effectively, knowing the data pattern and size is important and necessary.</td>
 </tr>
 <tr>
 <td>enable.data.loading.statistics</td>
 <td>false</td>
-<td>CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. This configuration when made <em><strong>true</strong></em> would log additional data loading statistics information to more accurately locate the issues being debugged. <strong>NOTE:</strong> Enabling this would log more debug information to log files, there by increasing the log files size significantly in short span of time.It is advised to configure the log files size, retention of log files parameters in log4j properties appropriately. Also extensive logging is an increased IO operation and hence over all data loading performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging.</td>
+<td>CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. This configuration when set to <em><strong>true</strong></em> would log additional data loading statistics information to more accurately locate the issues being debugged. <strong>NOTE:</strong> Enabling this would log more debug information to log files, thereby increasing the log file size significantly in a short span of time. It is advised to configure the log file size and log file retention parameters in the log4j properties appropriately. Also, extensive logging is an increased IO operation and hence overall data loading performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging.</td>
 </tr>
 <tr>
 <td>carbon.dictionary.chunk.size</td>
 <td>10000</td>
-<td>CarbonData generates dictionary keys and writes them to separate dictionary file during data loading. To optimize the IO, this configuration determines the number of dictionary keys to be persisted to dictionary file at a time. <strong>NOTE:</strong> Writing to file also serves as a commit point to the dictionary generated.Increasing more values in memory causes more data loss during system or application failure.It is advised to alter this configuration judiciously.</td>
+<td>CarbonData generates dictionary keys and writes them to a separate dictionary file during data loading. To optimize the IO, this configuration determines the number of dictionary keys to be persisted to the dictionary file at a time. <strong>NOTE:</strong> Writing to file also serves as a commit point to the dictionary generated. Keeping more values in memory increases the potential data loss during a system or application failure. It is advised to alter this configuration judiciously.</td>
 </tr>
 <tr>
 <td>dictionary.worker.threads</td>
 <td>1</td>
-<td>CarbonData supports Optimized data loading by relying on a dictionary server. Dictionary server helps to maintain dictionary values independent of the data loading and there by avoids reading the same input data multiples times. This configuration determines the number of concurrent dictionary generation or request that needs to be served by the dictionary server. <strong>NOTE:</strong> This configuration takes effect when <em><strong>carbon.options.single.pass</strong></em> is configured as true.Please refer to <em>carbon.options.single.pass</em>to understand how dictionary server optimizes data loading.</td>
-</tr>
-<tr>
-<td>enable.unsafe.sort</td>
-<td>true</td>
-<td>CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. This configuration enables to use unsafe functions in CarbonData. <strong>NOTE:</strong> For operations like data loading, which generates more short lived Java objects, Java GC can be a bottle neck. Using unsafe can overcome the GC overhead and improve the overall performance.</td>
-</tr>
-<tr>
-<td>enable.offheap.sort</td>
-<td>true</td>
-<td>CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. This configuration enables using off-heap memory for sorting of data during data loading.<strong>NOTE:</strong>  <em><strong>enable.unsafe.sort</strong></em> configuration needs to be configured to true for using off-heap</td>
-</tr>
-<tr>
-<td>enable.inmemory.merge.sort</td>
-<td>false</td>
-<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. These intermediate files needs to be sorted again using merge sort before writing to the final carbondata file.Performing merge sort in memory would increase the sorting performance at the cost of increased memory footprint. This Configuration specifies to do in-memory merge sort or to do file based merge sort.</td>
-</tr>
-<tr>
-<td>carbon.load.sort.scope</td>
-<td>LOCAL_SORT</td>
-<td>CarbonData can support various sorting options to match the balance between load and query performance. LOCAL_SORT:All the data given to an executor in the single load is fully sorted and written to carbondata files. Data loading performance is reduced a little as the entire data needs to be sorted in the executor. BATCH_SORT:Sorts the data in batches of configured size and writes to carbondata files. Data loading performance increases as the entire data need not be sorted.But query performance will get reduced due to false positives in block pruning and also due to more number of carbondata files written.Due to more number of carbondata files, if identified blocks &gt; cluster parallelism, query performance and concurrency will get reduced.GLOBAL SORT:Entire data in the data load is fully sorted and written to carbondata files. Data loading performance would get reduced as the entire data needs to be sorted.But the query performance increases significantly due to very less false positives and concurrency is also improved. <strong>NOTE:</strong> when BATCH_SORT is configured, it is recommended to keep <em><strong>carbon.load.batch.sort.size.inmb</strong></em> &gt; <em><strong>carbon.blockletgroup.size.in.mb</strong></em>
-</td>
-</tr>
-<tr>
-<td>carbon.load.batch.sort.size.inmb</td>
-<td>0</td>
-<td>When  <em><strong>carbon.load.sort.scope</strong></em> is configured as <em><strong>BATCH_SORT</strong></em>, this configuration needs to be added to specify the batch size for sorting and writing to carbondata files. <strong>NOTE:</strong> It is recommended to keep the value around 45% of <em><strong>carbon.sort.storage.inmemory.size.inmb</strong></em> to avoid spill to disk. Also it is recommended to keep the value higher than <em><strong>carbon.blockletgroup.size.in.mb</strong></em>. Refer to <em>carbon.load.sort.scope</em> for more information on sort options and the advantages/disadvantages of each option.</td>
+<td>CarbonData supports optimized data loading by relying on a dictionary server. The dictionary server helps to maintain dictionary values independent of the data loading and thereby avoids reading the same input data multiple times. This configuration determines the number of concurrent dictionary generation requests that need to be served by the dictionary server. <strong>NOTE:</strong> This configuration takes effect when <em><strong>carbon.options.single.pass</strong></em> is configured as true. Please refer to <em>carbon.options.single.pass</em> to understand how the dictionary server optimizes data loading.</td>
 </tr>
 <tr>
 <td>carbon.dictionary.server.port</td>
 <td>2030</td>
-<td>Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary.Single pass loading can be enabled using the option <em><strong>carbon.options.single.pass</strong></em>. When this option is specified, a dictionary server will be internally started to handle the dictionary generation and query requests. This configuration specifies the port on which the server need to listen for incoming requests.Port value ranges between 0-65535</td>
-</tr>
-<tr>
-<td>carbon.merge.sort.prefetch</td>
-<td>true</td>
-<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure memory footprint is within limits. These intermediate temp files will have to be sorted using merge sort before writing into CarbonData format. This configuration enables pre fetching of data from these temp files in order to optimize IO and speed up data loading process.</td>
-</tr>
-<tr>
-<td>carbon.loading.prefetch</td>
-<td>false</td>
-<td>CarbonData uses univocity parser to read csv files. This configuration is used to inform the parser whether it can prefetch the data from csv files to speed up the reading.<strong>NOTE:</strong> Enabling prefetch improves the data loading performance, but needs higher memory to keep more records which are read ahead from disk.</td>
-</tr>
-<tr>
-<td>carbon.prefetch.buffersize</td>
-<td>1000</td>
-<td>When the configuration <em><strong>carbon.merge.sort.prefetch</strong></em> is configured to true, we need to set the number of records that can be prefetched. This configuration is used specify the number of records to be prefetched.**NOTE: **Configuring more number of records to be prefetched increases memory footprint as more records will have to be kept in memory.</td>
-</tr>
-<tr>
-<td>load_min_size_inmb</td>
-<td>256</td>
-<td>This configuration is used along with <em><strong>carbon.load.min.size.enabled</strong></em>. This determines the minimum size of input files to be considered for distribution among executors while data loading.<strong>NOTE:</strong> Refer to <em><strong>carbon.load.min.size.enabled</strong></em> for understanding when this configuration needs to be used and its advantages and disadvantages.</td>
-</tr>
-<tr>
-<td>carbon.load.sortmemory.spill.percentage</td>
-<td>0</td>
-<td>During data loading, some data pages are kept in memory upto memory configured in <em><strong>carbon.sort.storage.inmemory.size.inmb</strong></em> beyond which they are spilled to disk as intermediate temporary sort files. This configuration determines after what percentage data needs to be spilled to disk. <strong>NOTE:</strong> Without this configuration, when the data pages occupy upto configured memory, new data pages would be dumped to disk and old pages are still maintained in disk.</td>
+<td>Single pass loading enables a single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after the initial load involves fewer incremental updates on the dictionary. Single pass loading can be enabled using the option <em><strong>carbon.options.single.pass</strong></em>. When this option is specified, a dictionary server will be internally started to handle the dictionary generation and query requests. This configuration specifies the port on which the server needs to listen for incoming requests. The port value ranges between 0 and 65535.</td>
 </tr>
 <tr>
 <td>carbon.load.directWriteToStorePath.enabled</td>
@@ -537,11 +532,6 @@
 <td>Based on the business scenarios, some columns might need to be loaded with null values. As null value cannot be written in csv files, some special characters might be adopted to specify null values. This configuration can be used to specify the null values format in the data being loaded.</td>
 </tr>
 <tr>
-<td>carbon.sort.storage.inmemory.size.inmb</td>
-<td>512</td>
-<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure memory footprint is within limits. When <em><strong>enable.unsafe.sort</strong></em> configuration is enabled, instead of using <em><strong>carbon.sort.size</strong></em> which is based on rows count, size occupied in memory is used to determine when to flush data pages to intermediate temp files. This configuration determines the memory to be used for storing data pages in memory. <strong>NOTE:</strong> Configuring a higher value ensures more data is maintained in memory and hence increases data loading performance due to reduced or no IO.Based on the memory availability in the nodes of the cluster, configure the values accordingly.</td>
-</tr>
-<tr>
 <td>carbon.column.compressor</td>
 <td>snappy</td>
 <td>CarbonData will compress the column values using the compressor specified by this configuration. Currently CarbonData supports 'snappy' and 'zstd' compressors.</td>
@@ -572,7 +562,7 @@
 <tr>
 <td>carbon.compaction.level.threshold</td>
 <td>4, 3</td>
-<td>Each CarbonData load will create one segment, if every load is small in size it will generate many small file over a period of time impacting the query performance. This configuration is for minor compaction which decides how many segments to be merged. Configuration is of the form (x,y). Compaction will be triggered for every x segments and form a single level 1 compacted segment. When the number of compacted level 1 segments reach y, compaction will be triggered again to merge them to form a single level 2 segment. For example: If it is set as 2, 3 then minor compaction will be triggered for every 2 segments. 3 is the number of level 1 compacted segments which is further compacted to new segment.<strong>NOTE:</strong> When <em><strong>carbon.enable.auto.load.merge</strong></em> is <strong>true</strong>, configuring higher values cause overall data loading time to increase as compaction will be triggered after data loading is complete but status is not returned till compaction is complete. But compacting more number of segments can increase query performance.Hence optimal values needs to be configured based on the business scenario. Valid values are between 0 to 100.</td>
+<td>Each CarbonData load will create one segment; if every load is small in size it will generate many small files over a period of time, impacting the query performance. This configuration is for minor compaction which decides how many segments are to be merged. The configuration is of the form (x,y). Compaction will be triggered for every x segments and form a single level 1 compacted segment. When the number of compacted level 1 segments reaches y, compaction will be triggered again to merge them to form a single level 2 segment. For example: if it is set as 2, 3 then minor compaction will be triggered for every 2 segments. 3 is the number of level 1 compacted segments which is further compacted to a new segment. <strong>NOTE:</strong> When <em><strong>carbon.enable.auto.load.merge</strong></em> is <strong>true</strong>, configuring higher values causes the overall data loading time to increase as compaction will be triggered after data loading is complete but the status is not returned till compaction is complete. But compacting more segments can increase query performance. Hence optimal values need to be configured based on the business scenario. Valid values are between 0 and 100.</td>
 </tr>
 <tr>
 <td>carbon.major.compaction.size</td>
@@ -582,38 +572,38 @@
 <tr>
 <td>carbon.horizontal.compaction.enable</td>
 <td>true</td>
-<td>CarbonData supports DELETE/UPDATE functionality by creating delta data files for existing carbondata files. These delta files would grow as more number of DELETE/UPDATE operations are performed.Compaction of these delta files are termed as horizontal compaction. This configuration is used to turn ON/OFF horizontal compaction. After every DELETE and UPDATE statement, horizontal compaction may occur in case the delta (DELETE/ UPDATE) files becomes more than specified threshold.**NOTE: **Having many delta files will reduce the query performance as scan has to happen on all these files before the final state of data can be decided.Hence it is advisable to keep horizontal compaction enabled and configure reasonable values to <em><strong>carbon.horizontal.UPDATE.compaction.threshold</strong></em> and <em><strong>carbon.horizontal.DELETE.compaction.threshold</strong></em>
+<td>CarbonData supports DELETE/UPDATE functionality by creating delta data files for existing carbondata files. These delta files would grow as more DELETE/UPDATE operations are performed. Compaction of these delta files is termed as horizontal compaction. This configuration is used to turn ON/OFF horizontal compaction. After every DELETE and UPDATE statement, horizontal compaction may occur in case the delta (DELETE/UPDATE) files become more than the specified threshold. <strong>NOTE:</strong> Having many delta files will reduce the query performance as the scan has to happen on all these files before the final state of data can be decided. Hence it is advisable to keep horizontal compaction enabled and configure reasonable values for <em><strong>carbon.horizontal.UPDATE.compaction.threshold</strong></em> and <em><strong>carbon.horizontal.DELETE.compaction.threshold</strong></em>.
 </td>
 </tr>
 <tr>
 <td>carbon.horizontal.update.compaction.threshold</td>
 <td>1</td>
-<td>This configuration specifies the threshold limit on number of UPDATE delta files within a segment. In case the number of delta files goes beyond the threshold, the UPDATE delta files within the segment becomes eligible for horizontal compaction and are compacted into single UPDATE delta file.Values range between 1 to 10000.</td>
+<td>This configuration specifies the threshold limit on the number of UPDATE delta files within a segment. In case the number of delta files goes beyond the threshold, the UPDATE delta files within the segment become eligible for horizontal compaction and are compacted into a single UPDATE delta file. Values range between 1 and 10000.</td>
 </tr>
 <tr>
 <td>carbon.horizontal.delete.compaction.threshold</td>
 <td>1</td>
-<td>This configuration specifies the threshold limit on number of DELETE delta files within a block of a segment. In case the number of delta files goes beyond the threshold, the DELETE delta files for the particular block of the segment becomes eligible for horizontal compaction and are compacted into single DELETE delta file.Values range between 1 to 10000.</td>
+<td>This configuration specifies the threshold limit on the number of DELETE delta files within a block of a segment. In case the number of delta files goes beyond the threshold, the DELETE delta files for the particular block of the segment become eligible for horizontal compaction and are compacted into a single DELETE delta file. Values range between 1 and 10000.</td>
 </tr>
 <tr>
 <td>carbon.update.segment.parallelism</td>
 <td>1</td>
-<td>CarbonData processes the UPDATE operations by grouping records belonging to a segment into a single executor task. When the amount of data to be updated is more, this behavior causes problems like restarting of executor due to low memory and data-spill related errors. This property specifies the parallelism for each segment during update.<strong>NOTE:</strong> It is recommended to set this value to a multiple of the number of executors for balance.Values range between 1 to 1000.</td>
+<td>CarbonData processes the UPDATE operations by grouping records belonging to a segment into a single executor task. When the amount of data to be updated is large, this behavior causes problems like restarting of executors due to low memory and data-spill related errors. This property specifies the parallelism for each segment during update. <strong>NOTE:</strong> It is recommended to set this value to a multiple of the number of executors for balance. Values range between 1 and 1000.</td>
 </tr>
 <tr>
 <td>carbon.numberof.preserve.segments</td>
 <td>0</td>
-<td>If the user wants to preserve some number of segments from being compacted then he can set this configuration. Example: carbon.numberof.preserve.segments = 2 then 2 latest segments will always be excluded from the compaction. No segments will be preserved by default.<strong>NOTE:</strong> This configuration is useful when the chances of input data can be wrong due to environment scenarios.Preserving some of the latest segments from being compacted can help to easily delete the wrongly loaded segments.Once compacted,it becomes more difficult to determine the exact data to be deleted(except when data is incrementing according to time)</td>
+<td>This configuration can be set to preserve a number of the latest segments from being compacted. Example: with carbon.numberof.preserve.segments = 2, the 2 latest segments will always be excluded from the compaction. No segments are preserved by default. <strong>NOTE:</strong> This configuration is useful when there is a chance that the input data is wrong due to environment issues. Preserving some of the latest segments from being compacted can help to easily delete the wrongly loaded segments. Once compacted, it becomes more difficult to determine the exact data to be deleted (except when data is incrementing according to time).</td>
 </tr>
 <tr>
 <td>carbon.allowed.compaction.days</td>
 <td>0</td>
-<td>This configuration is used to control on the number of recent segments that needs to be compacted, ignoring the older ones. This configuration is in days.For Example: If the configuration is 2, then the segments which are loaded in the time frame of past 2 days only will get merged. Segments which are loaded earlier than 2 days will not be merged. This configuration is disabled by default.<strong>NOTE:</strong> This configuration is useful when a bulk of history data is loaded into the carbondata.Query on this data is less frequent.In such cases involving these segments also into compaction will affect the resource consumption, increases overall compaction time.</td>
+<td>This configuration is used to control the number of recent segments that need to be compacted, ignoring the older ones. This configuration is in days. For example: if the configuration is 2, then only the segments which are loaded in the time frame of the past 2 days will get merged. Segments which are loaded earlier than 2 days will not be merged. This configuration is disabled by default. <strong>NOTE:</strong> This configuration is useful when a bulk of historical data is loaded into carbondata and queries on this data are less frequent. In such cases, including these segments in compaction increases the resource consumption and the overall compaction time.</td>
 </tr>
 <tr>
 <td>carbon.enable.auto.load.merge</td>
 <td>false</td>
-<td>Compaction can be automatically triggered once data load completes. This ensures that the segments are merged in time and thus query times does not increase with increase in segments. This configuration enables to do compaction along with data loading.**NOTE: **Compaction will be triggered once the data load completes.But the status of data load wait till the compaction is completed.Hence it might look like data loading time has increased, but thats not the case.Moreover failure of compaction will not affect the data loading status.If data load had completed successfully, the status would be updated and segments are committed.However, failure while data loading, will not trigger compaction and error is returned immediately.</td>
+<td>Compaction can be automatically triggered once a data load completes. This ensures that the segments are merged in time and thus query times do not increase with the increase in segments. This configuration enables compaction along with data loading. <strong>NOTE:</strong> Compaction will be triggered once the data load completes, but the data load status is not returned until the compaction is completed. Hence it might look like data loading time has increased, but that is not the case. Moreover, failure of compaction will not affect the data loading status: if the data load completed successfully, the status is updated and the segments are committed. However, a failure while data loading will not trigger compaction, and an error is returned immediately.</td>
 </tr>
 <tr>
 <td>carbon.enable.page.level.reader.in.compaction</td>
@@ -628,12 +618,12 @@
 <tr>
 <td>carbon.compaction.prefetch.enable</td>
 <td>false</td>
-<td>Compaction operation is similar to Query + data load where in data from qualifying segments are queried and data loading performed to generate a new single segment. This configuration determines whether to query ahead data from segments and feed it for data loading. **NOTE: **This configuration is disabled by default as it needs extra resources for querying extra data.Based on the memory availability on the cluster, user can enable it to improve compaction performance.</td>
+<td>A compaction operation is similar to query + data load, wherein data from qualifying segments is queried and data loading is performed to generate a new single segment. This configuration determines whether to query ahead data from segments and feed it for data loading. <strong>NOTE:</strong> This configuration is disabled by default as it needs extra resources for querying extra data. Based on the memory availability in the cluster, the user can enable it to improve compaction performance.</td>
 </tr>
 <tr>
 <td>carbon.merge.index.in.segment</td>
 <td>true</td>
-<td>Each CarbonData file has a companion CarbonIndex file which maintains the metadata about the data. These CarbonIndex files are read and loaded into driver and is used subsequently for pruning of data during queries. These CarbonIndex files are very small in size(few KB) and are many.Reading many small files from HDFS is not efficient and leads to slow IO performance.Hence these CarbonIndex files belonging to a segment can be combined into  a single file and read once there by increasing the IO throughput. This configuration enables to merge all the CarbonIndex files into a single MergeIndex file upon data loading completion.<strong>NOTE:</strong> Reading a single big file is more efficient in HDFS and IO throughput is very high.Due to this the time needed to load the index files into memory when query is received for the first time on that table is significantly reduced and there by significantly reduces the delay in serving the first query.</td>
+<td>Each CarbonData file has a companion CarbonIndex file which maintains the metadata about the data. These CarbonIndex files are read and loaded into the driver and are used subsequently for pruning of data during queries. These CarbonIndex files are very small in size (a few KB) and are many in number. Reading many small files from HDFS is not efficient and leads to slow IO performance. Hence these CarbonIndex files belonging to a segment can be combined into a single file and read once, thereby increasing the IO throughput. This configuration enables merging all the CarbonIndex files into a single MergeIndex file upon data loading completion. <strong>NOTE:</strong> Reading a single big file is more efficient in HDFS and IO throughput is very high. Due to this, the time needed to load the index files into memory when a query is received for the first time on that table is significantly reduced, thereby significantly reducing the delay in serving the first query.</td>
 </tr>
 </tbody>
 </table>
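+<p>For illustration, the data loading related properties above are normally set in the <em>carbon.properties</em> file. The snippet below is a minimal sketch rather than a recommendation: the property names are taken from the table above, while the chosen values (a BATCH_SORT load with a 128 MB batch size) are only assumptions for the example and should be tuned to the actual cluster.</p>
+<pre><code># carbon.properties - illustrative values only
+# sort each load in batches; keep the batch size above carbon.blockletgroup.size.in.mb (64 MB default)
+carbon.load.sort.scope=BATCH_SORT
+carbon.load.batch.sort.size.inmb=128
+# number of records held in memory per sort thread before an intermediate sort temp file is written
+carbon.sort.size=100000
+# compress intermediate sort temp files if disk IO is the bottleneck
+carbon.sort.temp.compressor=SNAPPY
+</code></pre>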
@@ -651,12 +641,12 @@
 <tr>
 <td>carbon.max.driver.lru.cache.size</td>
 <td>-1</td>
-<td>Maximum memory <strong>(in MB)</strong> upto which the driver process can cache the data (BTree and dictionary values). Beyond this, least recently used data will be removed from cache before loading new set of values.Default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. <strong>NOTE:</strong> Minimum number of entries that needs to be removed from cache in order to load the new set of data is determined and unloaded.ie.,for example if 3 cache entries qualify for pre-emption, out of these, those entries that free up more cache memory is removed prior to others. Please refer <a href="./faq.html#how-to-check-lru-cache-memory-footprint">FAQs</a> for checking LRU cache memory footprint.</td>
+<td>Maximum memory <strong>(in MB)</strong> up to which the driver process can cache the data (BTree and dictionary values). Beyond this, the least recently used data will be removed from the cache before loading a new set of values. The default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. <strong>NOTE:</strong> The minimum number of entries that needs to be removed from the cache in order to load the new set of data is determined and unloaded, i.e., for example, if 3 cache entries qualify for pre-emption, those entries among them that free up more cache memory are removed first. Please refer to the <a href="./faq.html#how-to-check-lru-cache-memory-footprint">FAQs</a> for checking the LRU cache memory footprint.</td>
 </tr>
 <tr>
 <td>carbon.max.executor.lru.cache.size</td>
 <td>-1</td>
-<td>Maximum memory <strong>(in MB)</strong> upto which the executor process can cache the data (BTree and reverse dictionary values).Default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. <strong>NOTE:</strong> If this parameter is not configured, then the value of <em><strong>carbon.max.driver.lru.cache.size</strong></em> will be used.</td>
+<td>Maximum memory <strong>(in MB)</strong> up to which the executor process can cache the data (BTree and reverse dictionary values). The default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. <strong>NOTE:</strong> If this parameter is not configured, then the value of <em><strong>carbon.max.driver.lru.cache.size</strong></em> will be used.</td>
 </tr>
 <tr>
 <td>max.query.execution.time</td>
@@ -669,12 +659,12 @@
 <td>CarbonData maintains the metadata which enables pruning of unnecessary files from being scanned as per the query conditions. To achieve pruning, the Min and Max of each column is maintained. Based on the filter condition in the query, certain data can be skipped from scanning by matching the filter value against the min, max values of the column(s) present in that carbondata file. This pruning enhances query performance significantly.</td>
 </tr>
 <tr>
-<td>carbon.dynamicallocation.schedulertimeout</td>
+<td>carbon.dynamical.location.scheduler.timeout</td>
 <td>5</td>
 <td>CarbonData has its own scheduling algorithm to suggest to Spark how many tasks need to be launched and how much work each task needs to do in a Spark cluster for any query on CarbonData. To determine the number of tasks that can be scheduled, knowing the count of active executors is necessary. When dynamic allocation is enabled on a YARN based Spark cluster, executor processes are shut down if no request is received for a particular amount of time. The executors are brought up when a request is received again. This configuration specifies the maximum time (unit in seconds) the carbon scheduler can wait for an executor to be active. The minimum value is 5 sec and the maximum value is 15 sec. <strong>NOTE:</strong> Waiting for a longer time leads to slow query response time. Moreover, it might be possible that YARN is not able to start the executors and waiting is not beneficial.</td>
 </tr>
 <tr>
-<td>carbon.scheduler.minregisteredresourcesratio</td>
+<td>carbon.scheduler.min.registered.resources.ratio</td>
 <td>0.8</td>
 <td>Specifies the minimum resource (executor) ratio needed for starting the block distribution. The default value is 0.8, which indicates 80% of the requested resource is allocated for starting block distribution. The minimum value is 0.1 and the maximum value is 1.0.</td>
 </tr>
@@ -707,7 +697,7 @@
 <td>carbon.search.worker.workload.limit</td>
 <td>10 * <em>carbon.search.scan.thread</em>
 </td>
-<td>Maximum number of active requests that can be sent to a worker.Beyond which the request needs to be rescheduled for later time or to a different worker.</td>
+<td>Maximum number of active requests that can be sent to a worker. Beyond which the request needs to be rescheduled for later time or to a different worker.</td>
 </tr>
 <tr>
 <td>carbon.detail.batch.size</td>
@@ -722,17 +712,17 @@
 <tr>
 <td>carbon.task.distribution</td>
 <td>block</td>
-<td>CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks needs to be launched and how much work each task need to do in a Spark cluster for any query on CarbonData.Each of these task distribution suggestions has its own advantages and disadvantages.Based on the customer use case, appropriate task distribution can be configured.<strong>block</strong>: Setting this value will launch one task per block. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>custom</strong>: Setting this value will group the blocks and distribute it uniformly to the available resources in the cluster. This enhances the query performance but not suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>blocklet</strong>: Setting this value will launch one task per blocklet. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>merge_small_files</strong>: Setting this value will merge all the small carbondata files upto a bigger size configured by <em><strong>spark.sql.files.maxPartitionBytes</strong></em> (128 MB is the default value,it is configurable) during querying. The small carbondata files are combined to a map task to reduce the number of read task. This enhances the performance.</td>
+<td>CarbonData has its own scheduling algorithm to suggest to Spark how many tasks need to be launched and how much work each task needs to do in a Spark cluster for any query on CarbonData. Each of these task distribution suggestions has its own advantages and disadvantages. Based on the customer use case, an appropriate task distribution can be configured. <strong>block</strong>: Setting this value will launch one task per block. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>custom</strong>: Setting this value will group the blocks and distribute them uniformly to the available resources in the cluster. This enhances the query performance but is not suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>blocklet</strong>: Setting this value will launch one task per blocklet. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>merge_small_files</strong>: Setting this value will merge all the small carbondata files up to a bigger size configured by <em><strong>spark.sql.files.maxPartitionBytes</strong></em> (128 MB is the default value, it is configurable) during querying. The small carbondata files are combined into a map task to reduce the number of read tasks. This enhances the performance.</td>
 </tr>
 <tr>
 <td>carbon.custom.block.distribution</td>
 <td>false</td>
-<td>CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks needs to be launched and how much work each task need to do in a Spark cluster for any query on CarbonData. When this configuration is true, CarbonData would distribute the available blocks to be scanned among the available number of cores.For Example:If there are 10 blocks to be scanned and only 3 tasks can be run(only 3 executor cores available in the cluster), CarbonData would combine blocks as 4,3,3 and give it to 3 tasks to run. <strong>NOTE:</strong> When this configuration is false, as per the <em><strong>carbon.task.distribution</strong></em> configuration, each block/blocklet would be given to each task.</td>
+<td>CarbonData has its own scheduling algorithm to suggest to Spark how many tasks need to be launched and how much work each task needs to do in a Spark cluster for any query on CarbonData. When this configuration is true, CarbonData would distribute the available blocks to be scanned among the available number of cores. For example: if there are 10 blocks to be scanned and only 3 tasks can be run (only 3 executor cores available in the cluster), CarbonData would combine the blocks as 4,3,3 and give them to 3 tasks to run. <strong>NOTE:</strong> When this configuration is false, as per the <em><strong>carbon.task.distribution</strong></em> configuration, each block/blocklet would be given to each task.</td>
 </tr>
 <tr>
 <td>enable.query.statistics</td>
 <td>false</td>
-<td>CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. This configuration when made <em><strong>true</strong></em> would log additional query statistics information to more accurately locate the issues being debugged.<strong>NOTE:</strong> Enabling this would log more debug information to log files, there by increasing the log files size significantly in short span of time.It is advised to configure the log files size, retention of log files parameters in log4j properties appropriately. Also extensive logging is an increased IO operation and hence over all query performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging.</td>
+<td>CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. This configuration when made <em><strong>true</strong></em> would log additional query statistics information to more accurately locate the issues being debugged. <strong>NOTE:</strong> Enabling this would log more debug information to log files, thereby increasing the log file size significantly in a short span of time. It is advised to configure the log file size and log retention parameters in the log4j properties appropriately. Extensive logging also increases IO and hence overall query performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging.</td>
 </tr>
 <tr>
 <td>enable.unsafe.in.query.processing</td>
@@ -740,14 +730,24 @@
 <td>CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. This configuration enables the use of unsafe functions in CarbonData while scanning the data during query.</td>
 </tr>
 <tr>
-<td>carbon.query.validate.directqueryondatamap</td>
+<td>carbon.query.validate.direct.query.on.datamap</td>
 <td>true</td>
 <td>CarbonData supports creating pre-aggregate table datamaps as independent tables. For some debugging purposes, it might be required to directly query from such datamap tables. This configuration allows querying such datamaps directly.</td>
 </tr>
 <tr>
+<td>carbon.max.driver.threads.for.block.pruning</td>
+<td>4</td>
+<td>Maximum number of threads used for driver-side block pruning, which helps when there are more than 100k carbon files. This configuration can be used to set the number of threads between 1 and 4.</td>
+</tr>
+<tr>
 <td>carbon.heap.memory.pooling.threshold.bytes</td>
 <td>1048576</td>
-<td>CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. Using unsafe, memory can be allocated on Java Heap or off heap. This configuration controls the allocation mechanism on Java HEAP.If the heap memory allocations of the given size is greater or equal than this value,it should go through the pooling mechanism.But if set this size to -1, it should not go through the pooling mechanism.Default value is 1048576(1MB, the same as Spark).Value to be specified in bytes.</td>
+<td>CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. Using unsafe, memory can be allocated on the Java heap or off heap. This configuration controls the allocation mechanism on the Java heap. If a heap memory allocation of the given size is greater than or equal to this value, it goes through the pooling mechanism; if this value is set to -1, the pooling mechanism is not used. The default value is 1048576 (1 MB, the same as Spark). The value is to be specified in bytes.</td>
+</tr>
+<tr>
+<td>carbon.push.rowfilters.for.vector</td>
+<td>false</td>
+<td>When enabled, complete row filters will be handled by carbon in case of vector reads. If it is disabled, then only page level pruning will be done by carbon and row level filtering will be done by Spark for vector reads. There are also scan optimizations in carbon to avoid multiple data copies when this parameter is set to false. There is no change in flow for non-vector based queries.</td>
 </tr>
 </tbody>
 </table>
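The query-tuning properties above normally live in the carbon.properties file read by the driver and executors. A minimal sketch of how a few of them might be set, assuming the property names and defaults listed in the table; the values are illustrative only:

    # carbon.properties -- query tuning (illustrative values)
    # one task per block suits concurrent and shuffle-heavy queries
    carbon.task.distribution=block
    # extra query statistics only while debugging; increases log size and IO
    enable.query.statistics=true
    # let carbon handle complete row filters during vector reads
    carbon.push.rowfilters.for.vector=false
    # cap driver-side pruning threads (1 to 4) for very large file counts
    carbon.max.driver.threads.for.block.pruning=4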
@@ -844,7 +844,7 @@
 <tbody>
 <tr>
 <td>carbon.options.bad.records.logger.enable</td>
-<td>CarbonData can identify the records that are not conformant to schema and isolate them as bad records.Enabling this configuration will make CarbonData to log such bad records.<strong>NOTE:</strong> If the input data contains many bad records, logging them will slow down the over all data loading throughput. The data load operation status would depend on the configuration in <em><strong>carbon.bad.records.action</strong></em>.</td>
+<td>CarbonData can identify the records that are not conformant to schema and isolate them as bad records. Enabling this configuration will make CarbonData log such bad records. <strong>NOTE:</strong> If the input data contains many bad records, logging them will slow down the overall data loading throughput. The data load operation status would depend on the configuration in <em><strong>carbon.bad.records.action</strong></em>.</td>
 </tr>
 <tr>
 <td>carbon.options.bad.records.logger.enable</td>
@@ -864,7 +864,7 @@
 </tr>
 <tr>
 <td>carbon.options.single.pass</td>
-<td>Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary. This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE. <strong>NOTE:</strong> Enabling this starts a new dictionary server to handle dictionary generation requests during data loading. Without this option, the input csv files will have to read twice.Once while dictionary generation and persisting to the dictionary files.second when the data loading need to convert the input data into carbondata format.Enabling this optimizes the optimizes to read the input data only once there by reducing IO and hence over all data loading time.If concurrent data loading needs to be supported, consider tuning <em><strong>dictionary.worker.threads</strong></em>.Port on which the dictionary serv
 er need to listen on can be configured using the configuration <em><strong>carbon.dictionary.server.port</strong></em>.</td>
+<td>Single Pass Loading enables a single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after the initial load involves fewer incremental updates on the dictionary. This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE. <strong>NOTE:</strong> Enabling this starts a new dictionary server to handle dictionary generation requests during data loading. Without this option, the input csv files will have to be read twice: once for dictionary generation and persisting to the dictionary files, and a second time when the data loading needs to convert the input data into carbondata format. Enabling this option reads the input data only once, thereby reducing IO and hence overall data loading time. If concurrent data loading needs to be supported, consider tuning <em><strong>dictionary.worker.threads</strong></em>. The port on which the dictionary server needs to listen can be configured using the configuration <em><strong>carbon.dictionary.server.port</strong></em>.</td>
 </tr>
 <tr>
 <td>carbon.options.bad.record.path</td>
@@ -879,11 +879,11 @@
 <td>Specifies whether to use unsafe sort during data loading. Unsafe sort reduces the garbage collection during data load operation, resulting in better performance.</td>
 </tr>
 <tr>
-<td>carbon.options.dateformat</td>
+<td>carbon.options.date.format</td>
 <td>Specifies the date format of the date columns in the data being loaded</td>
 </tr>
 <tr>
-<td>carbon.options.timestampformat</td>
+<td>carbon.options.timestamp.format</td>
 <td>Specifies the timestamp format of the timestamp columns in the data being loaded</td>
 </tr>
 <tr>
@@ -900,7 +900,7 @@
 </tr>
 <tr>
 <td>carbon.query.directQueryOnDataMap.enabled</td>
-<td>Specifies whether datamap can be queried directly. This is useful for debugging purposes.**NOTE: **Refer to <a href="#query-configuration">Query Configuration</a>#carbon.query.validate.directqueryondatamap for detailed information.</td>
+<td>Specifies whether datamaps can be queried directly. This is useful for debugging purposes. <strong>NOTE:</strong> Refer to <a href="#query-configuration">Query Configuration</a>#carbon.query.validate.direct.query.on.datamap for detailed information.</td>
 </tr>
 </tbody>
 </table>
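The carbon.options.* entries above are the per-session options that this page describes as dynamically configurable. A hedged sketch of setting them from a Spark SQL or beeline session, assuming the SET and RESET commands documented for dynamic configuration; paths and values are examples only:

    SET carbon.options.bad.records.logger.enable=true;
    SET carbon.options.bad.record.path=hdfs://hacluster/data/badrecords;  -- example path
    SET carbon.options.date.format=yyyy-MM-dd;
    SET carbon.options.single.pass=false;
    -- clear the session overrides again
    RESET;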

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/datamap-developer-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/datamap-developer-guide.html b/src/main/webapp/datamap-developer-guide.html
index f442fe2..286c21d 100644
--- a/src/main/webapp/datamap-developer-guide.html
+++ b/src/main/webapp/datamap-developer-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/webapp/datamap-management.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/datamap-management.html b/src/main/webapp/datamap-management.html
index f5f9678..5dc2b33 100644
--- a/src/main/webapp/datamap-management.html
+++ b/src/main/webapp/datamap-management.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>


[7/8] carbondata-site git commit: Added 1.5.1 version information

Posted by ra...@apache.org.
http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/configuration-parameters.html
----------------------------------------------------------------------
diff --git a/content/configuration-parameters.html b/content/configuration-parameters.html
index 5c334eb..5cc7a45 100644
--- a/content/configuration-parameters.html
+++ b/content/configuration-parameters.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -220,7 +220,7 @@
                                     <div>
 <h1>
 <a id="configuring-carbondata" class="anchor" href="#configuring-carbondata" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Configuring CarbonData</h1>
-<p>This guide explains the configurations that can be used to tune CarbonData to achieve better performance.Most of the properties that control the internal settings have reasonable default values. They are listed along with the properties along with explanation.</p>
+<p>This guide explains the configurations that can be used to tune CarbonData to achieve better performance. Most of the properties that control the internal settings have reasonable default values. They are listed below along with their default values and an explanation.</p>
 <ul>
 <li><a href="#system-configuration">System Configuration</a></li>
 <li><a href="#data-loading-configuration">Data Loading Configuration</a></li>
@@ -244,7 +244,7 @@
 <tr>
 <td>carbon.storelocation</td>
 <td>spark.sql.warehouse.dir property value</td>
-<td>Location where CarbonData will create the store, and write the data in its custom format. If not specified,the path defaults to spark.sql.warehouse.dir property. <strong>NOTE:</strong> Store location should be in HDFS.</td>
+<td>Location where CarbonData will create the store, and write the data in its custom format. If not specified, the path defaults to the spark.sql.warehouse.dir property. <strong>NOTE:</strong> Store location should be in HDFS or S3.</td>
 </tr>
 <tr>
 <td>carbon.ddl.base.hdfs.url</td>
@@ -269,17 +269,17 @@
 <tr>
 <td>carbon.query.show.datamaps</td>
 <td>true</td>
-<td>CarbonData stores datamaps as independent tables so as to allow independent maintenance to some extent. When this property is true,which is by default, show tables command will list all the tables including datatmaps(eg: Preaggregate table), else datamaps will be excluded from the table list.<strong>NOTE:</strong>  It is generally not required for the user to do any maintenance operations on these tables and hence not required to be seen.But it is shown by default so that user or admin can get clear understanding of the system for capacity planning.</td>
+<td>CarbonData stores datamaps as independent tables so as to allow independent maintenance to some extent. When this property is true, which is the default, the show tables command will list all the tables including datamaps (eg: pre-aggregate tables), else datamaps will be excluded from the table list. <strong>NOTE:</strong> It is generally not required for the user to do any maintenance operations on these tables and hence they are not required to be seen. But they are shown by default so that the user or admin can get a clear understanding of the system for capacity planning.</td>
 </tr>
 <tr>
 <td>carbon.segment.lock.files.preserve.hours</td>
 <td>48</td>
-<td>In order to support parallel data loading onto the same table, CarbonData sequences(locks) at the granularity of segments.Operations affecting the segment(like IUD, alter) are blocked from parallel operations. This property value indicates the number of hours the segment lock files will be preserved after dataload. These lock files will be deleted with the clean command after the configured number of hours.</td>
+<td>In order to support parallel data loading onto the same table, CarbonData sequences (locks) operations at the granularity of segments. Operations affecting a segment (like IUD, alter) are blocked from parallel operations. This property value indicates the number of hours the segment lock files will be preserved after data load. These lock files will be deleted with the clean command after the configured number of hours.</td>
 </tr>
 <tr>
 <td>carbon.timestamp.format</td>
 <td>yyyy-MM-dd HH:mm:ss</td>
-<td>CarbonData can understand data of timestamp type and process it in special manner.It can be so that the format of Timestamp data is different from that understood by CarbonData by default. This configuration allows users to specify the format of Timestamp in their data.</td>
+<td>CarbonData can understand data of timestamp type and process it in a special manner. The format of the timestamp data may differ from the format understood by CarbonData by default. This configuration allows users to specify the format of Timestamp in their data.</td>
 </tr>
 <tr>
 <td>carbon.lock.type</td>
@@ -292,14 +292,19 @@
 <td>This configuration specifies the path where lock files have to be created. Recommended to configure zookeeper lock type or configure HDFS lock path(to this property) in case of S3 file system as locking is not feasible on S3.</td>
 </tr>
 <tr>
+<td>enable.offheap.sort</td>
+<td>true</td>
+<td>Whether carbondata will use off-heap or on-heap memory. By default, the value is true and carbondata will use the property value from <em>carbon.unsafe.working.memory.in.mb</em> or <em>carbon.unsafe.driver.working.memory.in.mb</em> as the amount of memory; if it is false, carbondata will use the minimum of the configured amount of unsafe memory and 60% of the JVM heap memory as the amount of memory.</td>
+</tr>
+<tr>
 <td>carbon.unsafe.working.memory.in.mb</td>
 <td>512</td>
-<td>CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. The Minimum value recommeded is 512MB. Any value below this is reset to default value of 512MB. <strong>NOTE:</strong> The below formulas explain how to arrive at the off-heap size required.Memory Required For Data Loading:(<em>carbon.number.of.cores.while.loading</em>) * (Number of tables to load in parallel) * (<em>offheap.sort.chunk.size.inmb</em> + <em>carbon.blockletgroup.size.in.mb</em> + <em>carbon.blockletgroup.size.in.mb</em>/3.5 ). Memory required for Query:SPARK_EXECUTOR_INSTANCES * (<em>carbon.blockletgroup.size.in.mb</em> + <em>carbon.blockletgroup.size.in.mb</em> * 3.5) * spark.executor.cores</td>
+<td>CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. The minimum value recommended is 512 MB. Any value below this is reset to the default value of 512 MB. <strong>NOTE:</strong> The below formulas explain how to arrive at the off-heap size required. Memory required for data loading per executor: (<em>carbon.number.of.cores.while.loading</em>) * (Number of tables to load in parallel) * (<em>offheap.sort.chunk.size.inmb</em> + <em>carbon.blockletgroup.size.in.mb</em> + <em>carbon.blockletgroup.size.in.mb</em>/3.5 ). Memory required for query per executor: (<em>carbon.blockletgroup.size.in.mb</em> + <em>carbon.blockletgroup.size.in.mb</em> * 3.5) * spark.executor.cores</td>
 </tr>
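A worked example of the sizing formulas in the row above, assuming the default values carbon.number.of.cores.while.loading=2, offheap.sort.chunk.size.inmb=64 and carbon.blockletgroup.size.in.mb=64, with one table loaded at a time and an assumed spark.executor.cores=4; the numbers are illustrative only:

    loading per executor = 2 * 1 * (64 + 64 + 64/3.5) MB ≈ 2 * 146 MB ≈ 293 MB
    query per executor   = (64 + 64 * 3.5) MB * 4        = 288 * 4    = 1152 MB
    carbon.unsafe.working.memory.in.mb should cover the larger of the two, e.g. 1280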
 <tr>
 <td>carbon.unsafe.driver.working.memory.in.mb</td>
-<td>60% of JVM Heap Memory</td>
-<td>CarbonData supports storing data in unsafe on-heap memory in driver for certain operations like insert into, query for loading datamap cache. The Minimum value recommended is 512MB.</td>
+<td>(none)</td>
+<td>CarbonData supports storing data in unsafe on-heap memory in driver for certain operations like insert into, query for loading datamap cache. The Minimum value recommended is 512MB. If this configuration is not set, carbondata will use the value of <code>carbon.unsafe.working.memory.in.mb</code>.</td>
 </tr>
 <tr>
 <td>carbon.update.sync.folder</td>
@@ -309,12 +314,12 @@
 <tr>
 <td>carbon.invisible.segments.preserve.count</td>
 <td>200</td>
-<td>CarbonData maintains each data load entry in tablestatus file. The entries from this file are not deleted for those segments that are compacted or dropped, but are made invisible. If the number of data loads are very high, the size and number of entries in tablestatus file can become too many causing unnecessary reading of all data. This configuration specifies the number of segment entries to be maintained afte they are compacted or dropped.Beyond this, the entries are moved to a separate history tablestatus file. <strong>NOTE:</strong> The entries in tablestatus file help to identify the operations performed on CarbonData table and is also used for checkpointing during various data manupulation operations. This is similar to AUDIT file maintaining all the operations and its status.Hence the entries are never deleted but moved to a separate history file.</td>
+<td>CarbonData maintains each data load entry in the tablestatus file. The entries from this file are not deleted for those segments that are compacted or dropped, but are made invisible. If the number of data loads is very high, the size and number of entries in the tablestatus file can become too many, causing unnecessary reading of all data. This configuration specifies the number of segment entries to be maintained after they are compacted or dropped. Beyond this, the entries are moved to a separate history tablestatus file. <strong>NOTE:</strong> The entries in the tablestatus file help to identify the operations performed on a CarbonData table and are also used for checkpointing during various data manipulation operations. This is similar to an AUDIT file maintaining all the operations and their status. Hence the entries are never deleted but moved to a separate history file.</td>
 </tr>
 <tr>
 <td>carbon.lock.retries</td>
 <td>3</td>
-<td>CarbonData ensures consistency of operations by blocking certain operations from running in parallel. In order to block the operations from running in parallel, lock is obtained on the table. This configuration specifies the maximum number of retries to obtain the lock for any operations other than load. <strong>NOTE:</strong> Data manupulation operations like Compaction,UPDATE,DELETE  or LOADING,UPDATE,DELETE are not allowed to run in parallel.How ever data loading can happen in parallel to compaction.</td>
+<td>CarbonData ensures consistency of operations by blocking certain operations from running in parallel. In order to block the operations from running in parallel, a lock is obtained on the table. This configuration specifies the maximum number of retries to obtain the lock for any operations other than load. <strong>NOTE:</strong> Data manipulation operations like compaction, UPDATE, DELETE or LOADING, UPDATE, DELETE are not allowed to run in parallel. However, data loading can happen in parallel with compaction.</td>
 </tr>
 <tr>
 <td>carbon.lock.retry.timeout.sec</td>
@@ -335,14 +340,55 @@
 </thead>
 <tbody>
 <tr>
+<td>carbon.concurrent.lock.retries</td>
+<td>100</td>
+<td>CarbonData supports concurrent data loading onto the same table. To ensure the loading status is correctly updated into the system, locks are used to sequence the status update step. This configuration specifies the maximum number of retries to obtain the lock for updating the load status. <strong>NOTE:</strong> This value is high because the more concurrent loads that happen, the higher the chances of not being able to obtain the lock when tried. Adjust this value according to the number of concurrent loads to be supported by the system.</td>
+</tr>
+<tr>
+<td>carbon.concurrent.lock.retry.timeout.sec</td>
+<td>1</td>
+<td>Specifies the interval between the retries to obtain the lock for concurrent operations. <strong>NOTE:</strong> Refer to <em><strong>carbon.concurrent.lock.retries</strong></em> for understanding why CarbonData uses locks during data loading operations.</td>
+</tr>
+<tr>
+<td>carbon.csv.read.buffersize.byte</td>
+<td>1048576</td>
+<td>CarbonData uses Hadoop InputFormat to read the csv files. This configuration value is used to pass buffer size as input for the Hadoop MR job when reading the csv files. This value is configured in bytes. <strong>NOTE:</strong> Refer to <em><strong>org.apache.hadoop.mapreduce.InputFormat</strong></em> documentation for additional information.</td>
+</tr>
+<tr>
+<td>carbon.loading.prefetch</td>
+<td>false</td>
+<td>CarbonData uses univocity parser to read csv files. This configuration is used to inform the parser whether it can prefetch the data from csv files to speed up the reading.<strong>NOTE:</strong> Enabling prefetch improves the data loading performance, but needs higher memory to keep more records which are read ahead from disk.</td>
+</tr>
+<tr>
+<td>carbon.skip.empty.line</td>
+<td>false</td>
+<td>The csv files given to CarbonData for loading can contain empty lines. Based on the business scenario, this empty line might have to be ignored or needs to be treated as a NULL value for all columns. In order to define this business behavior, this configuration is provided. <strong>NOTE:</strong> In order to consider NULL values for non string columns and continue with the data load, <em><strong>carbon.bad.records.action</strong></em> needs to be set to <strong>FORCE</strong>; else the data load will fail as bad records are encountered.</td>
+</tr>
+<tr>
 <td>carbon.number.of.cores.while.loading</td>
 <td>2</td>
 <td>Number of cores to be used while loading data. This also determines the number of threads to be used to read the input files (csv) in parallel. <strong>NOTE:</strong> This configured value is used in every data loading step to parallelize the operations. Configuring a higher value can lead to increased early thread pre-emption by the OS and thereby reduce the overall performance.</td>
 </tr>
 <tr>
-<td>carbon.sort.size</td>
-<td>100000</td>
-<td>Number of records to hold in memory to sort and write intermediate temp files.<strong>NOTE:</strong> Memory required for data loading increases with increase in configured value as each thread would cache configured number of records.</td>
+<td>enable.unsafe.sort</td>
+<td>true</td>
+<td>CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. This configuration enables the use of unsafe functions in CarbonData. <strong>NOTE:</strong> For operations like data loading, which generate more short lived Java objects, Java GC can be a bottleneck. Using unsafe can overcome the GC overhead and improve the overall performance.</td>
+</tr>
+<tr>
+<td>enable.offheap.sort</td>
+<td>true</td>
+<td>CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. This configuration enables using off-heap memory for sorting of data during data loading. <strong>NOTE:</strong> The <em><strong>enable.unsafe.sort</strong></em> configuration needs to be set to true to use off-heap memory.</td>
+</tr>
+<tr>
+<td>carbon.load.sort.scope</td>
+<td>LOCAL_SORT</td>
+<td>CarbonData can support various sorting options to match the balance between load and query performance. LOCAL_SORT: All the data given to an executor in a single load is fully sorted and written to carbondata files. Data loading performance is reduced a little as the entire data needs to be sorted in the executor. BATCH_SORT: Sorts the data in batches of configured size and writes to carbondata files. Data loading performance increases as the entire data need not be sorted. But query performance will get reduced due to false positives in block pruning and also due to a larger number of carbondata files written. Due to the larger number of carbondata files, if identified blocks &gt; cluster parallelism, query performance and concurrency will get reduced. GLOBAL_SORT: Entire data in the data load is fully sorted and written to carbondata files. Data loading performance would get reduced as the entire data needs to be sorted. But the query performance increases significantly due to far fewer false positives and concurrency is also improved. <strong>NOTE:</strong> when BATCH_SORT is configured, it is recommended to keep <em><strong>carbon.load.batch.sort.size.inmb</strong></em> &gt; <em><strong>carbon.blockletgroup.size.in.mb</strong></em>
+</td>
+</tr>
+<tr>
+<td>carbon.load.batch.sort.size.inmb</td>
+<td>0</td>
+<td>When  <em><strong>carbon.load.sort.scope</strong></em> is configured as <em><strong>BATCH_SORT</strong></em>, this configuration needs to be added to specify the batch size for sorting and writing to carbondata files. <strong>NOTE:</strong> It is recommended to keep the value around 45% of <em><strong>carbon.sort.storage.inmemory.size.inmb</strong></em> to avoid spill to disk. Also it is recommended to keep the value higher than <em><strong>carbon.blockletgroup.size.in.mb</strong></em>. Refer to <em>carbon.load.sort.scope</em> for more information on sort options and the advantages/disadvantages of each option.</td>
 </tr>
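If BATCH_SORT is chosen, the two rows above interact: the batch size should stay around 45% of the in-memory sort storage and above the blocklet group size. A small carbon.properties sketch under those assumptions, using the default 512 MB sort storage and 64 MB blocklet group; the values are illustrative only:

    carbon.load.sort.scope=BATCH_SORT
    # ~45% of carbon.sort.storage.inmemory.size.inmb (512) and > carbon.blockletgroup.size.in.mb (64)
    carbon.load.batch.sort.size.inmb=230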
 <tr>
 <td>carbon.global.sort.rdd.storage.level</td>
@@ -352,12 +398,17 @@
 <tr>
 <td>carbon.load.global.sort.partitions</td>
 <td>0</td>
-<td>The Number of partitions to use when shuffling data for sort. Default value 0 means to use same number of map tasks as reduce tasks.<strong>NOTE:</strong> In general, it is recommended to have 2-3 tasks per CPU core in your cluster.</td>
+<td>The number of partitions to use when shuffling data for global sort. The default value 0 means to use the same number of map tasks as reduce tasks. <strong>NOTE:</strong> In general, it is recommended to have 2-3 tasks per CPU core in your cluster.</td>
+</tr>
+<tr>
+<td>carbon.sort.size</td>
+<td>100000</td>
+<td>Number of records to hold in memory to sort and write intermediate sort temp files. <strong>NOTE:</strong> Memory required for data loading will increase if you increase this value. Besides, each thread will cache this amount of records. The number of threads is configured by <em>carbon.number.of.cores.while.loading</em>.</td>
 </tr>
 <tr>
 <td>carbon.options.bad.records.logger.enable</td>
 <td>false</td>
-<td>CarbonData can identify the records that are not conformant to schema and isolate them as bad records. Enabling this configuration will make CarbonData to log such bad records.<strong>NOTE:</strong> If the input data contains many bad records, logging them will slow down the over all data loading throughput. The data load operation status would depend on the configuration in <em><strong>carbon.bad.records.action</strong></em>.</td>
+<td>CarbonData can identify the records that are not conformant to schema and isolate them as bad records. Enabling this configuration will make CarbonData log such bad records. <strong>NOTE:</strong> If the input data contains many bad records, logging them will slow down the overall data loading throughput. The data load operation status would depend on the configuration in <em><strong>carbon.bad.records.action</strong></em>.</td>
 </tr>
 <tr>
 <td>carbon.bad.records.action</td>
@@ -372,58 +423,63 @@
 <tr>
 <td>carbon.options.bad.record.path</td>
 <td>(none)</td>
-<td>Specifies the HDFS path where bad records are to be stored. By default the value is Null. This path must to be configured by the user if <em><strong>carbon.options.bad.records.logger.enable</strong></em> is <strong>true</strong> or <em><strong>carbon.bad.records.action</strong></em> is <strong>REDIRECT</strong>.</td>
+<td>Specifies the HDFS path where bad records are to be stored. By default the value is Null. This path must be configured by the user if <em><strong>carbon.options.bad.records.logger.enable</strong></em> is <strong>true</strong> or <em><strong>carbon.bad.records.action</strong></em> is <strong>REDIRECT</strong>.</td>
 </tr>
 <tr>
 <td>carbon.blockletgroup.size.in.mb</td>
 <td>64</td>
-<td>Please refer to <a href="./file-structure-of-carbondata.html#carbondata-file-format">file-structure-of-carbondata</a> to understand the storage format of CarbonData. The data are read as a group of blocklets which are called blocklet groups. This parameter specifies the size of each blocklet group. Higher value results in better sequential IO access. The minimum value is 16MB, any value lesser than 16MB will reset to the default value (64MB).<strong>NOTE:</strong> Configuring a higher value might lead to poor performance as an entire blocklet group will have to read into memory before processing.For filter queries with limit, it is <strong>not advisable</strong> to have a bigger blocklet size. For Aggregation queries which need to return more number of rows,bigger blocklet size is advisable.</td>
+<td>Please refer to <a href="./file-structure-of-carbondata.html#carbondata-file-format">file-structure-of-carbondata</a> to understand the storage format of CarbonData. The data are read as a group of blocklets which are called blocklet groups. This parameter specifies the size of each blocklet group. A higher value results in better sequential IO access. The minimum value is 16MB; any value lesser than 16MB will reset to the default value (64MB). <strong>NOTE:</strong> Configuring a higher value might lead to poor performance as an entire blocklet group will have to be read into memory before processing. For filter queries with limit, it is <strong>not advisable</strong> to have a bigger blocklet size. For aggregation queries which need to return more number of rows, a bigger blocklet size is advisable.</td>
 </tr>
 <tr>
 <td>carbon.sort.file.write.buffer.size</td>
 <td>16384</td>
-<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. This configuration determines the buffer size to be used for reading and writing such files. <strong>NOTE:</strong> This configuration is useful to tune IO and derive optimal performance.Based on the OS and underlying harddisk type, these values can significantly affect the overall performance.It is ideal to tune the buffersize equivalent to the IO buffer size of the OS.Recommended range is between 10240 to 10485760 bytes.</td>
+<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. This configuration determines the buffer size to be used for reading and writing such files. <strong>NOTE:</strong> This configuration is useful to tune IO and derive optimal performance. Based on the OS and underlying harddisk type, these values can significantly affect the overall performance. It is ideal to tune the buffer size equivalent to the IO buffer size of the OS. Recommended range is between 10240 and 10485760 bytes.</td>
 </tr>
 <tr>
 <td>carbon.sort.intermediate.files.limit</td>
 <td>20</td>
-<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. Before writing the target carbondat file, the data in these intermediate files needs to be sorted again so as to ensure the entire data in the data load is sorted. This configuration determines the minimum number of intermediate files after which merged sort is applied on them sort the data.<strong>NOTE:</strong> Intermediate merging happens on a separate thread in the background.Number of threads used is determined by <em><strong>carbon.merge.sort.reader.thread</strong></em>.Configuring a low value will cause more time to be spent in merging these intermediate merged files which can cause more IO.Configuring a high value would cause not to use the idle threads to do intermediate sort merges.Range of recommended values are between 2 and 50</td>
-</tr>
-<tr>
-<td>carbon.csv.read.buffersize.byte</td>
-<td>1048576</td>
-<td>CarbonData uses Hadoop InputFormat to read the csv files. This configuration value is used to pass buffer size as input for the Hadoop MR job when reading the csv files. This value is configured in bytes.<strong>NOTE:</strong> Refer to <em><strong>org.apache.hadoop.mapreduce.InputFormat</strong></em> documentation for additional information.</td>
+<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. Before writing the target carbondata file, the records in these intermediate files need to be merged to reduce the number of intermediate files. This configuration determines the minimum number of intermediate files after which merge sort is applied on them to sort the data. <strong>NOTE:</strong> Intermediate merging happens on a separate thread in the background. The number of threads used is determined by <em><strong>carbon.merge.sort.reader.thread</strong></em>. Configuring a low value will cause more time to be spent in merging these intermediate merged files which can cause more IO. Configuring a high value would leave the idle threads unused for intermediate sort merges. Recommended range is between 2 and 50.</td>
 </tr>
 <tr>
 <td>carbon.merge.sort.reader.thread</td>
 <td>3</td>
-<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. When the intermediate files reaches <em><strong>carbon.sort.intermediate.files.limit</strong></em> the files will be merged,the number of threads specified in this configuration will be used to read the intermediate files for performing merge sort.<strong>NOTE:</strong> Refer to <em><strong>carbon.sort.intermediate.files.limit</strong></em> for operation description.Configuring less  number of threads can cause merging to slow down over loading process where as configuring more number of threads can cause thread contention with threads in other data loading steps.Hence configure a fraction of <em><strong>carbon.number.of.cores.while.loading</strong></em>.</td>
+<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. When the intermediate files reach <em><strong>carbon.sort.intermediate.files.limit</strong></em>, the files will be merged in another thread pool. This value controls the size of that pool. Each thread will read the intermediate files, do a merge sort and finally write the records to another file. <strong>NOTE:</strong> Refer to <em><strong>carbon.sort.intermediate.files.limit</strong></em> for the operation description. Configuring a smaller number of threads can cause merging to slow down the overall loading process, whereas configuring a larger number of threads can cause thread contention with threads in other data loading steps. Hence configure a fraction of <em><strong>carbon.number.of.cores.while.loading</strong></em>.</td>
 </tr>
 <tr>
-<td>carbon.concurrent.lock.retries</td>
-<td>100</td>
-<td>CarbonData supports concurrent data loading onto same table. To ensure the loading status is correctly updated into the system,locks are used to sequence the status updation step. This configuration specifies the maximum number of retries to obtain the lock for updating the load status. <strong>NOTE:</strong> This value is high as more number of concurrent loading happens,more the chances of not able to obtain the lock when tried. Adjust this value according to the number of concurrent loading to be supported by the system.</td>
+<td>carbon.merge.sort.prefetch</td>
+<td>true</td>
+<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure memory footprint is within limits. These intermediate temp files will have to be sorted using merge sort before writing into CarbonData format. This configuration enables pre fetching of data from these temp files in order to optimize IO and speed up data loading process.</td>
 </tr>
 <tr>
-<td>carbon.concurrent.lock.retry.timeout.sec</td>
-<td>1</td>
-<td>Specifies the interval between the retries to obtain the lock for concurrent operations. <strong>NOTE:</strong> Refer to <em><strong>carbon.concurrent.lock.retries</strong></em> for understanding why CarbonData uses locks during data loading operations.</td>
+<td>carbon.prefetch.buffersize</td>
+<td>1000</td>
+<td>When the configuration <em><strong>carbon.merge.sort.prefetch</strong></em> is set to true, we need to set the number of records that can be prefetched. This configuration is used to specify the number of records to be prefetched. <strong>NOTE:</strong> Configuring a higher number of records to be prefetched increases the memory footprint, as more records will have to be kept in memory.</td>
 </tr>
 <tr>
-<td>carbon.skip.empty.line</td>
+<td>enable.inmemory.merge.sort</td>
 <td>false</td>
-<td>The csv files givent to CarbonData for loading can contain empty lines. Based on the business scenario, this empty line might have to be ignored or needs to be treated as NULL value for all columns.In order to define this business behavior, this configuration is provided.<strong>NOTE:</strong> In order to consider NULL values for non string columns and continue with data load, <em><strong>carbon.bad.records.action</strong></em> need to be set to <strong>FORCE</strong>;else data load will be failed as bad records encountered.</td>
+<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. These intermediate files need to be sorted again using merge sort before writing to the final carbondata file. Performing merge sort in memory would increase the sorting performance at the cost of an increased memory footprint. This configuration specifies whether to do in-memory merge sort or file based merge sort.</td>
+</tr>
+<tr>
+<td>carbon.sort.storage.inmemory.size.inmb</td>
+<td>512</td>
+<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure memory footprint is within limits. When <em><strong>enable.unsafe.sort</strong></em> configuration is enabled, instead of using <em><strong>carbon.sort.size</strong></em> which is based on rows count, size occupied in memory is used to determine when to flush data pages to intermediate temp files. This configuration determines the memory to be used for storing data pages in memory. <strong>NOTE:</strong> Configuring a higher value ensures more data is maintained in memory and hence increases data loading performance due to reduced or no IO. Based on the memory availability in the nodes of the cluster, configure the values accordingly.</td>
+</tr>
+<tr>
+<td>carbon.load.sortmemory.spill.percentage</td>
+<td>0</td>
+<td>During data loading, some data pages are kept in memory up to the memory configured in <em><strong>carbon.sort.storage.inmemory.size.inmb</strong></em>, beyond which they are spilled to disk as intermediate temporary sort files. This configuration determines after what percentage of that memory the data needs to be spilled to disk. <strong>NOTE:</strong> Without this configuration, when the data pages occupy up to the configured memory, new data pages would be dumped to disk while the old pages are still maintained in memory.</td>
 </tr>
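A short carbon.properties sketch tying together the two memory-related rows above; the figures assume an executor with a few GB of off-heap budget and are illustrative only:

    # keep ~1 GB of data pages in memory before writing sort temp files
    carbon.sort.storage.inmemory.size.inmb=1024
    # assumed example: start spilling pages to disk once 75% of that memory is filled
    carbon.load.sortmemory.spill.percentage=75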
 <tr>
 <td>carbon.enable.calculate.size</td>
 <td>true</td>
 <td>
-<strong>For Load Operation</strong>: Setting this property calculates the size of the carbon data file (.carbondata) and carbon index file (.carbonindex) for every load and updates the table status file. <strong>For Describe Formatted</strong>: Setting this property calculates the total size of the carbon data files and carbon index files for the respective table and displays in describe formatted command. <strong>NOTE:</strong> This is useful to determine the overall size of the carbondata table and also get an idea of how the table is growing in order to take up other backup strategy decisions.</td>
+<strong>For Load Operation</strong>: Enabling this property will let carbondata calculate the size of the carbon data file (.carbondata) and the carbon index file (.carbonindex) for each load and update the table status file. <strong>For Describe Formatted</strong>: Enabling this property will let carbondata calculate the total size of the carbon data files and the carbon index files for each table and display it in the describe formatted command. <strong>NOTE:</strong> This is useful to determine the overall size of the carbondata table and also get an idea of how the table is growing in order to make backup strategy decisions.</td>
 </tr>
 <tr>
 <td>carbon.cutOffTimestamp</td>
 <td>(none)</td>
-<td>CarbonData has capability to generate the Dictionary values for the timestamp columns from the data itself without the need to store the computed dictionary values. This configuration sets the start date for calculating the timestamp. Java counts the number of milliseconds from start of "1970-01-01 00:00:00". This property is used to customize the start of position. For example "2000-01-01 00:00:00". <strong>NOTE:</strong> The date must be in the form <em><strong>carbon.timestamp.format</strong></em>. CarbonData supports storing data for upto 68 years.For example, if the cut-off time is 1970-01-01 05:30:00, then data upto 2038-01-01 05:30:00 will be supported by CarbonData.</td>
+<td>CarbonData has the capability to generate the dictionary values for the timestamp columns from the data itself without the need to store the computed dictionary values. This configuration sets the start date for calculating the timestamp. Java counts the number of milliseconds from the start of "1970-01-01 00:00:00". This property is used to customize the start position, for example "2000-01-01 00:00:00". <strong>NOTE:</strong> The date must be in the form <em><strong>carbon.timestamp.format</strong></em>. CarbonData supports storing data for up to 68 years. For example, if the cut-off time is 1970-01-01 05:30:00, then data up to 2038-01-01 05:30:00 will be supported by CarbonData.</td>
 </tr>
 <tr>
 <td>carbon.timegranularity</td>
@@ -432,99 +488,38 @@
 </tr>
 <tr>
 <td>carbon.use.local.dir</td>
-<td>false</td>
+<td>true</td>
 <td>CarbonData,during data loading, writes files to local temp directories before copying the files to HDFS. This configuration is used to specify whether CarbonData can write locally to tmp directory of the container or to the YARN application directory.</td>
 </tr>
 <tr>
-<td>carbon.use.multiple.temp.dir</td>
-<td>false</td>
-<td>When multiple disks are present in the system, YARN is generally configured with multiple disks to be used as temp directories for managing the containers. This configuration specifies whether to use multiple YARN local directories during data loading for disk IO load balancing.Enable <em><strong>carbon.use.local.dir</strong></em> for this configuration to take effect. <strong>NOTE:</strong> Data Loading is an IO intensive operation whose performance can be limited by the disk IO threshold, particularly during multi table concurrent data load.Configuring this parameter, balances the disk IO across multiple disks there by improving the over all load performance.</td>
-</tr>
-<tr>
 <td>carbon.sort.temp.compressor</td>
-<td>(none)</td>
-<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure memory footprint is within limits. These temporary files can be compressed and written in order to save the storage space. This configuration specifies the name of compressor to be used to compress the intermediate sort temp files during sort procedure in data loading. The valid values are 'SNAPPY','GZIP','BZIP2','LZ4','ZSTD' and empty. By default, empty means that Carbondata will not compress the sort temp files. <strong>NOTE:</strong> Compressor will be useful if you encounter disk bottleneck.Since the data needs to be compressed and decompressed,it involves additional CPU cycles,but is compensated by the high IO throughput due to less data to be written or read from the disks.</td>
+<td>SNAPPY</td>
+<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure the memory footprint is within limits. These temporary files can be compressed and written in order to save the storage space. This configuration specifies the name of the compressor to be used to compress the intermediate sort temp files during the sort procedure in data loading. The valid values are 'SNAPPY', 'GZIP', 'BZIP2', 'LZ4', 'ZSTD' and empty. Specifying empty means that CarbonData will not compress the sort temp files. <strong>NOTE:</strong> A compressor will be useful if you encounter a disk bottleneck. Since the data needs to be compressed and decompressed, it involves additional CPU cycles, but this is compensated by the high IO throughput due to less data to be written or read from the disks.</td>
 </tr>
 <tr>
 <td>carbon.load.skewedDataOptimization.enabled</td>
 <td>false</td>
-<td>During data loading,CarbonData would divide the number of blocks equally so as to ensure all executors process same number of blocks. This mechanism satisfies most of the scenarios and ensures maximum parallel processing for optimal data loading performance.In some business scenarios, there might be scenarios where the size of blocks vary significantly and hence some executors would have to do more work if they get blocks containing more data. This configuration enables size based block allocation strategy for data loading. When loading, carbondata will use file size based block allocation strategy for task distribution. It will make sure that all the executors process the same size of data.<strong>NOTE:</strong> This configuration is useful if the size of your input data files varies widely, say 1MB to 1GB.For this configuration to work effectively,knowing the data pattern and size is important and necessary.</td>
-</tr>
-<tr>
-<td>carbon.load.min.size.enabled</td>
-<td>false</td>
-<td>During Data Loading, CarbonData would divide the number of files among the available executors to parallelize the loading operation. When the input data files are very small, this action causes to generate many small carbondata files. This configuration determines whether to enable node minumun input data size allocation strategy for data loading.It will make sure that the node load the minimum amount of data there by reducing number of carbondata files.<strong>NOTE:</strong> This configuration is useful if the size of the input data files are very small, like 1MB to 256MB.Refer to <em><strong>load_min_size_inmb</strong></em> to configure the minimum size to be considered for splitting files among executors.</td>
+<td>During data loading, CarbonData would divide the number of blocks equally so as to ensure all executors process the same number of blocks. This mechanism satisfies most of the scenarios and ensures maximum parallel processing for optimal data loading performance. In some business scenarios, the size of blocks may vary significantly and hence some executors would have to do more work if they get blocks containing more data. This configuration enables a size based block allocation strategy for data loading. When loading, carbondata will use a file size based block allocation strategy for task distribution. It will make sure that all the executors process the same size of data. <strong>NOTE:</strong> This configuration is useful if the size of your input data files varies widely, say 1MB to 1GB. For this configuration to work effectively, knowing the data pattern and size is important and necessary.</td>
 </tr>
 <tr>
 <td>enable.data.loading.statistics</td>
 <td>false</td>
-<td>CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. This configuration when made <em><strong>true</strong></em> would log additional data loading statistics information to more accurately locate the issues being debugged. <strong>NOTE:</strong> Enabling this would log more debug information to log files, there by increasing the log files size significantly in short span of time.It is advised to configure the log files size, retention of log files parameters in log4j properties appropriately. Also extensive logging is an increased IO operation and hence over all data loading performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging.</td>
+<td>CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. This configuration when made <em><strong>true</strong></em> would log additional data loading statistics information to more accurately locate the issues being debugged. <strong>NOTE:</strong> Enabling this would log more debug information to log files, thereby increasing the log file size significantly in a short span of time. It is advised to configure the log file size and log retention parameters in the log4j properties appropriately. Extensive logging also increases IO and hence overall data loading performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging.</td>
 </tr>
 <tr>
 <td>carbon.dictionary.chunk.size</td>
 <td>10000</td>
-<td>CarbonData generates dictionary keys and writes them to separate dictionary file during data loading. To optimize the IO, this configuration determines the number of dictionary keys to be persisted to dictionary file at a time. <strong>NOTE:</strong> Writing to file also serves as a commit point to the dictionary generated.Increasing more values in memory causes more data loss during system or application failure.It is advised to alter this configuration judiciously.</td>
+<td>CarbonData generates dictionary keys and writes them to a separate dictionary file during data loading. To optimize the IO, this configuration determines the number of dictionary keys to be persisted to the dictionary file at a time. <strong>NOTE:</strong> Writing to the file also serves as a commit point for the dictionary generated. Keeping more values in memory increases the risk of data loss during a system or application failure. It is advised to alter this configuration judiciously.</td>
 </tr>
 <tr>
 <td>dictionary.worker.threads</td>
 <td>1</td>
-<td>CarbonData supports Optimized data loading by relying on a dictionary server. Dictionary server helps to maintain dictionary values independent of the data loading and there by avoids reading the same input data multiples times. This configuration determines the number of concurrent dictionary generation or request that needs to be served by the dictionary server. <strong>NOTE:</strong> This configuration takes effect when <em><strong>carbon.options.single.pass</strong></em> is configured as true.Please refer to <em>carbon.options.single.pass</em>to understand how dictionary server optimizes data loading.</td>
-</tr>
-<tr>
-<td>enable.unsafe.sort</td>
-<td>true</td>
-<td>CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. This configuration enables to use unsafe functions in CarbonData. <strong>NOTE:</strong> For operations like data loading, which generates more short lived Java objects, Java GC can be a bottle neck. Using unsafe can overcome the GC overhead and improve the overall performance.</td>
-</tr>
-<tr>
-<td>enable.offheap.sort</td>
-<td>true</td>
-<td>CarbonData supports storing data in off-heap memory for certain operations during data loading and query. This helps to avoid the Java GC and thereby improve the overall performance. This configuration enables using off-heap memory for sorting of data during data loading.<strong>NOTE:</strong>  <em><strong>enable.unsafe.sort</strong></em> configuration needs to be configured to true for using off-heap</td>
-</tr>
-<tr>
-<td>enable.inmemory.merge.sort</td>
-<td>false</td>
-<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. These intermediate files needs to be sorted again using merge sort before writing to the final carbondata file.Performing merge sort in memory would increase the sorting performance at the cost of increased memory footprint. This Configuration specifies to do in-memory merge sort or to do file based merge sort.</td>
-</tr>
-<tr>
-<td>carbon.load.sort.scope</td>
-<td>LOCAL_SORT</td>
-<td>CarbonData can support various sorting options to match the balance between load and query performance. LOCAL_SORT:All the data given to an executor in the single load is fully sorted and written to carbondata files. Data loading performance is reduced a little as the entire data needs to be sorted in the executor. BATCH_SORT:Sorts the data in batches of configured size and writes to carbondata files. Data loading performance increases as the entire data need not be sorted.But query performance will get reduced due to false positives in block pruning and also due to more number of carbondata files written.Due to more number of carbondata files, if identified blocks &gt; cluster parallelism, query performance and concurrency will get reduced.GLOBAL SORT:Entire data in the data load is fully sorted and written to carbondata files. Data loading performance would get reduced as the entire data needs to be sorted.But the query performance increases significantly due to very less fals
 e positives and concurrency is also improved. <strong>NOTE:</strong> when BATCH_SORT is configured, it is recommended to keep <em><strong>carbon.load.batch.sort.size.inmb</strong></em> &gt; <em><strong>carbon.blockletgroup.size.in.mb</strong></em>
-</td>
-</tr>
-<tr>
-<td>carbon.load.batch.sort.size.inmb</td>
-<td>0</td>
-<td>When  <em><strong>carbon.load.sort.scope</strong></em> is configured as <em><strong>BATCH_SORT</strong></em>, this configuration needs to be added to specify the batch size for sorting and writing to carbondata files. <strong>NOTE:</strong> It is recommended to keep the value around 45% of <em><strong>carbon.sort.storage.inmemory.size.inmb</strong></em> to avoid spill to disk. Also it is recommended to keep the value higher than <em><strong>carbon.blockletgroup.size.in.mb</strong></em>. Refer to <em>carbon.load.sort.scope</em> for more information on sort options and the advantages/disadvantages of each option.</td>
+<td>CarbonData supports optimized data loading by relying on a dictionary server. The dictionary server helps to maintain dictionary values independent of the data loading and thereby avoids reading the same input data multiple times. This configuration determines the number of concurrent dictionary generation requests that need to be served by the dictionary server. <strong>NOTE:</strong> This configuration takes effect when <em><strong>carbon.options.single.pass</strong></em> is configured as true. Please refer to <em>carbon.options.single.pass</em> to understand how the dictionary server optimizes data loading.</td>
 </tr>
 <tr>
 <td>carbon.dictionary.server.port</td>
 <td>2030</td>
-<td>Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary.Single pass loading can be enabled using the option <em><strong>carbon.options.single.pass</strong></em>. When this option is specified, a dictionary server will be internally started to handle the dictionary generation and query requests. This configuration specifies the port on which the server need to listen for incoming requests.Port value ranges between 0-65535</td>
-</tr>
-<tr>
-<td>carbon.merge.sort.prefetch</td>
-<td>true</td>
-<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure memory footprint is within limits. These intermediate temp files will have to be sorted using merge sort before writing into CarbonData format. This configuration enables pre fetching of data from these temp files in order to optimize IO and speed up data loading process.</td>
-</tr>
-<tr>
-<td>carbon.loading.prefetch</td>
-<td>false</td>
-<td>CarbonData uses univocity parser to read csv files. This configuration is used to inform the parser whether it can prefetch the data from csv files to speed up the reading.<strong>NOTE:</strong> Enabling prefetch improves the data loading performance, but needs higher memory to keep more records which are read ahead from disk.</td>
-</tr>
-<tr>
-<td>carbon.prefetch.buffersize</td>
-<td>1000</td>
-<td>When the configuration <em><strong>carbon.merge.sort.prefetch</strong></em> is configured to true, we need to set the number of records that can be prefetched. This configuration is used specify the number of records to be prefetched.**NOTE: **Configuring more number of records to be prefetched increases memory footprint as more records will have to be kept in memory.</td>
-</tr>
-<tr>
-<td>load_min_size_inmb</td>
-<td>256</td>
-<td>This configuration is used along with <em><strong>carbon.load.min.size.enabled</strong></em>. This determines the minimum size of input files to be considered for distribution among executors while data loading.<strong>NOTE:</strong> Refer to <em><strong>carbon.load.min.size.enabled</strong></em> for understanding when this configuration needs to be used and its advantages and disadvantages.</td>
-</tr>
-<tr>
-<td>carbon.load.sortmemory.spill.percentage</td>
-<td>0</td>
-<td>During data loading, some data pages are kept in memory upto memory configured in <em><strong>carbon.sort.storage.inmemory.size.inmb</strong></em> beyond which they are spilled to disk as intermediate temporary sort files. This configuration determines after what percentage data needs to be spilled to disk. <strong>NOTE:</strong> Without this configuration, when the data pages occupy upto configured memory, new data pages would be dumped to disk and old pages are still maintained in disk.</td>
+<td>Single Pass Loading enables a single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary. Single pass loading can be enabled using the option <em><strong>carbon.options.single.pass</strong></em>. When this option is specified, a dictionary server will be internally started to handle the dictionary generation and query requests. This configuration specifies the port on which the server needs to listen for incoming requests. Port values range between 0 and 65535.</td>
 </tr>
 <tr>
 <td>carbon.load.directWriteToStorePath.enabled</td>
@@ -537,11 +532,6 @@
 <td>Based on the business scenarios, some columns might need to be loaded with null values. As null value cannot be written in csv files, some special characters might be adopted to specify null values. This configuration can be used to specify the null values format in the data being loaded.</td>
 </tr>
 <tr>
-<td>carbon.sort.storage.inmemory.size.inmb</td>
-<td>512</td>
-<td>CarbonData writes every <em><strong>carbon.sort.size</strong></em> number of records to intermediate temp files during data loading to ensure memory footprint is within limits. When <em><strong>enable.unsafe.sort</strong></em> configuration is enabled, instead of using <em><strong>carbon.sort.size</strong></em> which is based on rows count, size occupied in memory is used to determine when to flush data pages to intermediate temp files. This configuration determines the memory to be used for storing data pages in memory. <strong>NOTE:</strong> Configuring a higher value ensures more data is maintained in memory and hence increases data loading performance due to reduced or no IO.Based on the memory availability in the nodes of the cluster, configure the values accordingly.</td>
-</tr>
-<tr>
 <td>carbon.column.compressor</td>
 <td>snappy</td>
 <td>CarbonData will compress the column values using the compressor specified by this configuration. Currently CarbonData supports 'snappy' and 'zstd' compressors.</td>
@@ -572,7 +562,7 @@
 <tr>
 <td>carbon.compaction.level.threshold</td>
 <td>4, 3</td>
-<td>Each CarbonData load will create one segment, if every load is small in size it will generate many small file over a period of time impacting the query performance. This configuration is for minor compaction which decides how many segments to be merged. Configuration is of the form (x,y). Compaction will be triggered for every x segments and form a single level 1 compacted segment. When the number of compacted level 1 segments reach y, compaction will be triggered again to merge them to form a single level 2 segment. For example: If it is set as 2, 3 then minor compaction will be triggered for every 2 segments. 3 is the number of level 1 compacted segments which is further compacted to new segment.<strong>NOTE:</strong> When <em><strong>carbon.enable.auto.load.merge</strong></em> is <strong>true</strong>, configuring higher values cause overall data loading time to increase as compaction will be triggered after data loading is complete but status is not returned till compaction 
 is complete. But compacting more number of segments can increase query performance.Hence optimal values needs to be configured based on the business scenario. Valid values are between 0 to 100.</td>
+<td>Each CarbonData load will create one segment; if every load is small in size it will generate many small files over a period of time, impacting the query performance. This configuration is for minor compaction, which decides how many segments are to be merged. Configuration is of the form (x,y). Compaction will be triggered for every x segments and form a single level 1 compacted segment. When the number of compacted level 1 segments reaches y, compaction will be triggered again to merge them to form a single level 2 segment. For example: if it is set to 2, 3 then minor compaction will be triggered for every 2 segments. 3 is the number of level 1 compacted segments which is further compacted to a new segment.<strong>NOTE:</strong> When <em><strong>carbon.enable.auto.load.merge</strong></em> is <strong>true</strong>, configuring higher values causes overall data loading time to increase as compaction will be triggered after data loading is complete but status is not returned till compaction is complete. But compacting more segments can increase query performance. Hence optimal values need to be configured based on the business scenario. Valid values are between 0 and 100.</td>
 </tr>
 <tr>
 <td>carbon.major.compaction.size</td>
@@ -582,38 +572,38 @@
 <tr>
 <td>carbon.horizontal.compaction.enable</td>
 <td>true</td>
-<td>CarbonData supports DELETE/UPDATE functionality by creating delta data files for existing carbondata files. These delta files would grow as more number of DELETE/UPDATE operations are performed.Compaction of these delta files are termed as horizontal compaction. This configuration is used to turn ON/OFF horizontal compaction. After every DELETE and UPDATE statement, horizontal compaction may occur in case the delta (DELETE/ UPDATE) files becomes more than specified threshold.**NOTE: **Having many delta files will reduce the query performance as scan has to happen on all these files before the final state of data can be decided.Hence it is advisable to keep horizontal compaction enabled and configure reasonable values to <em><strong>carbon.horizontal.UPDATE.compaction.threshold</strong></em> and <em><strong>carbon.horizontal.DELETE.compaction.threshold</strong></em>
+<td>CarbonData supports DELETE/UPDATE functionality by creating delta data files for existing carbondata files. These delta files would grow as more DELETE/UPDATE operations are performed. Compaction of these delta files is termed as horizontal compaction. This configuration is used to turn ON/OFF horizontal compaction. After every DELETE and UPDATE statement, horizontal compaction may occur in case the delta (DELETE/UPDATE) files become more than the specified threshold. <strong>NOTE:</strong> Having many delta files will reduce the query performance as scan has to happen on all these files before the final state of data can be decided. Hence it is advisable to keep horizontal compaction enabled and configure reasonable values for <em><strong>carbon.horizontal.UPDATE.compaction.threshold</strong></em> and <em><strong>carbon.horizontal.DELETE.compaction.threshold</strong></em>
 </td>
 </tr>
 <tr>
 <td>carbon.horizontal.update.compaction.threshold</td>
 <td>1</td>
-<td>This configuration specifies the threshold limit on number of UPDATE delta files within a segment. In case the number of delta files goes beyond the threshold, the UPDATE delta files within the segment becomes eligible for horizontal compaction and are compacted into single UPDATE delta file.Values range between 1 to 10000.</td>
+<td>This configuration specifies the threshold limit on the number of UPDATE delta files within a segment. In case the number of delta files goes beyond the threshold, the UPDATE delta files within the segment become eligible for horizontal compaction and are compacted into a single UPDATE delta file. Values range between 1 and 10000.</td>
 </tr>
 <tr>
 <td>carbon.horizontal.delete.compaction.threshold</td>
 <td>1</td>
-<td>This configuration specifies the threshold limit on number of DELETE delta files within a block of a segment. In case the number of delta files goes beyond the threshold, the DELETE delta files for the particular block of the segment becomes eligible for horizontal compaction and are compacted into single DELETE delta file.Values range between 1 to 10000.</td>
+<td>This configuration specifies the threshold limit on the number of DELETE delta files within a block of a segment. In case the number of delta files goes beyond the threshold, the DELETE delta files for the particular block of the segment become eligible for horizontal compaction and are compacted into a single DELETE delta file. Values range between 1 and 10000.</td>
 </tr>
 <tr>
 <td>carbon.update.segment.parallelism</td>
 <td>1</td>
-<td>CarbonData processes the UPDATE operations by grouping records belonging to a segment into a single executor task. When the amount of data to be updated is more, this behavior causes problems like restarting of executor due to low memory and data-spill related errors. This property specifies the parallelism for each segment during update.<strong>NOTE:</strong> It is recommended to set this value to a multiple of the number of executors for balance.Values range between 1 to 1000.</td>
+<td>CarbonData processes the UPDATE operations by grouping records belonging to a segment into a single executor task. When the amount of data to be updated is large, this behavior causes problems like restarting of the executor due to low memory and data-spill related errors. This property specifies the parallelism for each segment during update.<strong>NOTE:</strong> It is recommended to set this value to a multiple of the number of executors for balance. Values range between 1 and 1000.</td>
 </tr>
 <tr>
 <td>carbon.numberof.preserve.segments</td>
 <td>0</td>
-<td>If the user wants to preserve some number of segments from being compacted then he can set this configuration. Example: carbon.numberof.preserve.segments = 2 then 2 latest segments will always be excluded from the compaction. No segments will be preserved by default.<strong>NOTE:</strong> This configuration is useful when the chances of input data can be wrong due to environment scenarios.Preserving some of the latest segments from being compacted can help to easily delete the wrongly loaded segments.Once compacted,it becomes more difficult to determine the exact data to be deleted(except when data is incrementing according to time)</td>
+<td>If the user wants to preserve some number of segments from being compacted then he can set this configuration. Example: carbon.numberof.preserve.segments = 2 then 2 latest segments will always be excluded from the compaction. No segments will be preserved by default.<strong>NOTE:</strong> This configuration is useful when the chances of input data can be wrong due to environment scenarios. Preserving some of the latest segments from being compacted can help to easily delete the wrongly loaded segments. Once compacted, it becomes more difficult to determine the exact data to be deleted (except when data is incrementing according to time).</td>
 </tr>
 <tr>
 <td>carbon.allowed.compaction.days</td>
 <td>0</td>
-<td>This configuration is used to control on the number of recent segments that needs to be compacted, ignoring the older ones. This configuration is in days.For Example: If the configuration is 2, then the segments which are loaded in the time frame of past 2 days only will get merged. Segments which are loaded earlier than 2 days will not be merged. This configuration is disabled by default.<strong>NOTE:</strong> This configuration is useful when a bulk of history data is loaded into the carbondata.Query on this data is less frequent.In such cases involving these segments also into compaction will affect the resource consumption, increases overall compaction time.</td>
+<td>This configuration is used to control the number of recent segments that need to be compacted, ignoring the older ones. This configuration is in days. For example: if the configuration is 2, then only the segments which are loaded in the time frame of the past 2 days will get merged. Segments which are loaded earlier than 2 days will not be merged. This configuration is disabled by default.<strong>NOTE:</strong> This configuration is useful when a bulk of history data is loaded into carbondata and queries on this data are less frequent. In such cases, involving these segments in compaction affects the resource consumption and increases the overall compaction time.</td>
 </tr>
 <tr>
 <td>carbon.enable.auto.load.merge</td>
 <td>false</td>
-<td>Compaction can be automatically triggered once data load completes. This ensures that the segments are merged in time and thus query times does not increase with increase in segments. This configuration enables to do compaction along with data loading.**NOTE: **Compaction will be triggered once the data load completes.But the status of data load wait till the compaction is completed.Hence it might look like data loading time has increased, but thats not the case.Moreover failure of compaction will not affect the data loading status.If data load had completed successfully, the status would be updated and segments are committed.However, failure while data loading, will not trigger compaction and error is returned immediately.</td>
+<td>Compaction can be automatically triggered once data load completes. This ensures that the segments are merged in time and thus query times do not increase with the increase in segments. This configuration enables compaction along with data loading. <strong>NOTE:</strong> Compaction will be triggered once the data load completes, but the status of the data load waits till the compaction is completed. Hence it might look like data loading time has increased, but that's not the case. Moreover, failure of compaction will not affect the data loading status. If data load had completed successfully, the status would be updated and segments are committed. However, a failure while data loading will not trigger compaction and an error is returned immediately.</td>
 </tr>
 <tr>
 <td>carbon.enable.page.level.reader.in.compaction</td>
@@ -628,12 +618,12 @@
 <tr>
 <td>carbon.compaction.prefetch.enable</td>
 <td>false</td>
-<td>Compaction operation is similar to Query + data load where in data from qualifying segments are queried and data loading performed to generate a new single segment. This configuration determines whether to query ahead data from segments and feed it for data loading. **NOTE: **This configuration is disabled by default as it needs extra resources for querying extra data.Based on the memory availability on the cluster, user can enable it to improve compaction performance.</td>
+<td>Compaction operation is similar to Query + data load, wherein data from qualifying segments is queried and data loading is performed to generate a new single segment. This configuration determines whether to query ahead data from segments and feed it for data loading. <strong>NOTE:</strong> This configuration is disabled by default as it needs extra resources for querying extra data. Based on the memory availability on the cluster, the user can enable it to improve compaction performance.</td>
 </tr>
 <tr>
 <td>carbon.merge.index.in.segment</td>
 <td>true</td>
-<td>Each CarbonData file has a companion CarbonIndex file which maintains the metadata about the data. These CarbonIndex files are read and loaded into driver and is used subsequently for pruning of data during queries. These CarbonIndex files are very small in size(few KB) and are many.Reading many small files from HDFS is not efficient and leads to slow IO performance.Hence these CarbonIndex files belonging to a segment can be combined into  a single file and read once there by increasing the IO throughput. This configuration enables to merge all the CarbonIndex files into a single MergeIndex file upon data loading completion.<strong>NOTE:</strong> Reading a single big file is more efficient in HDFS and IO throughput is very high.Due to this the time needed to load the index files into memory when query is received for the first time on that table is significantly reduced and there by significantly reduces the delay in serving the first query.</td>
+<td>Each CarbonData file has a companion CarbonIndex file which maintains the metadata about the data. These CarbonIndex files are read and loaded into the driver and are used subsequently for pruning of data during queries. These CarbonIndex files are very small in size (a few KB) and are many. Reading many small files from HDFS is not efficient and leads to slow IO performance. Hence these CarbonIndex files belonging to a segment can be combined into a single file and read once, thereby increasing the IO throughput. This configuration enables merging all the CarbonIndex files into a single MergeIndex file upon data loading completion.<strong>NOTE:</strong> Reading a single big file is more efficient in HDFS and IO throughput is very high. Due to this, the time needed to load the index files into memory when a query is received for the first time on that table is significantly reduced, thereby significantly reducing the delay in serving the first query.</td>
 </tr>
 </tbody>
 </table>
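<p>For reference, the following is a minimal carbon.properties sketch showing how a few of the data loading and compaction settings described above could be combined; the property names are taken from the table, but the values are illustrative assumptions only and should be tuned for the actual cluster and data size:</p>
<pre><code># Sort data given to each executor within the load (default scope)
carbon.load.sort.scope=LOCAL_SORT
# Trigger minor compaction automatically once a data load completes
carbon.enable.auto.load.merge=true
# Merge every 4 segments into a level 1 segment, and every 3 level 1 segments into a level 2 segment
carbon.compaction.level.threshold=4,3
# Combine the CarbonIndex files of a segment into a single merge index file
carbon.merge.index.in.segment=true
</code></pre>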
@@ -651,12 +641,12 @@
 <tr>
 <td>carbon.max.driver.lru.cache.size</td>
 <td>-1</td>
-<td>Maximum memory <strong>(in MB)</strong> upto which the driver process can cache the data (BTree and dictionary values). Beyond this, least recently used data will be removed from cache before loading new set of values.Default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. <strong>NOTE:</strong> Minimum number of entries that needs to be removed from cache in order to load the new set of data is determined and unloaded.ie.,for example if 3 cache entries qualify for pre-emption, out of these, those entries that free up more cache memory is removed prior to others. Please refer <a href="./faq.html#how-to-check-lru-cache-memory-footprint">FAQs</a> for checking LRU cache memory footprint.</td>
+<td>Maximum memory <strong>(in MB)</strong> upto which the driver process can cache the data (BTree and dictionary values). Beyond this, least recently used data will be removed from cache before loading a new set of values. Default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. <strong>NOTE:</strong> The minimum number of entries that needs to be removed from cache in order to load the new set of data is determined and unloaded, i.e., for example if 3 cache entries qualify for pre-emption, out of these, those entries that free up more cache memory are removed prior to others. Please refer <a href="./faq.html#how-to-check-lru-cache-memory-footprint">FAQs</a> for checking LRU cache memory footprint.</td>
 </tr>
 <tr>
 <td>carbon.max.executor.lru.cache.size</td>
 <td>-1</td>
-<td>Maximum memory <strong>(in MB)</strong> upto which the executor process can cache the data (BTree and reverse dictionary values).Default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. <strong>NOTE:</strong> If this parameter is not configured, then the value of <em><strong>carbon.max.driver.lru.cache.size</strong></em> will be used.</td>
+<td>Maximum memory <strong>(in MB)</strong> upto which the executor process can cache the data (BTree and reverse dictionary values). Default value of -1 means there is no memory limit for caching. Only integer values greater than 0 are accepted. <strong>NOTE:</strong> If this parameter is not configured, then the value of <em><strong>carbon.max.driver.lru.cache.size</strong></em> will be used.</td>
 </tr>
 <tr>
 <td>max.query.execution.time</td>
@@ -669,12 +659,12 @@
 <td>CarbonData maintains the metadata which enables pruning of unnecessary files from being scanned as per the query conditions. To achieve pruning, Min and Max of each column is maintained. Based on the filter condition in the query, certain data can be skipped from scanning by matching the filter value against the min and max values of the column(s) present in that carbondata file. This pruning enhances query performance significantly.</td>
 </tr>
 <tr>
-<td>carbon.dynamicallocation.schedulertimeout</td>
+<td>carbon.dynamical.location.scheduler.timeout</td>
 <td>5</td>
 <td>CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks need to be launched and how much work each task needs to do in a Spark cluster for any query on CarbonData. To determine the number of tasks that can be scheduled, knowing the count of active executors is necessary. When dynamic allocation is enabled on a YARN based spark cluster, executor processes are shutdown if no request is received for a particular amount of time. The executors are brought up when the request is received again. This configuration specifies the maximum time (unit in seconds) the carbon scheduler can wait for an executor to be active. Minimum value is 5 sec and maximum value is 15 sec. <strong>NOTE:</strong> Waiting for a longer time leads to slow query response time. Moreover it might be possible that YARN is not able to start the executors and waiting is not beneficial.</td>
 </tr>
 <tr>
-<td>carbon.scheduler.minregisteredresourcesratio</td>
+<td>carbon.scheduler.min.registered.resources.ratio</td>
 <td>0.8</td>
 <td>Specifies the minimum resource (executor) ratio needed for starting the block distribution. The default value is 0.8, which indicates 80% of the requested resource is allocated for starting block distribution. The minimum value is 0.1 min and the maximum value is 1.0.</td>
 </tr>
@@ -707,7 +697,7 @@
 <td>carbon.search.worker.workload.limit</td>
 <td>10 * <em>carbon.search.scan.thread</em>
 </td>
-<td>Maximum number of active requests that can be sent to a worker.Beyond which the request needs to be rescheduled for later time or to a different worker.</td>
+<td>Maximum number of active requests that can be sent to a worker. Beyond which the request needs to be rescheduled for later time or to a different worker.</td>
 </tr>
 <tr>
 <td>carbon.detail.batch.size</td>
@@ -722,17 +712,17 @@
 <tr>
 <td>carbon.task.distribution</td>
 <td>block</td>
-<td>CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks needs to be launched and how much work each task need to do in a Spark cluster for any query on CarbonData.Each of these task distribution suggestions has its own advantages and disadvantages.Based on the customer use case, appropriate task distribution can be configured.<strong>block</strong>: Setting this value will launch one task per block. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>custom</strong>: Setting this value will group the blocks and distribute it uniformly to the available resources in the cluster. This enhances the query performance but not suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>blocklet</strong>: Setting this value will launch one task per blocklet. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>merge_smal
 l_files</strong>: Setting this value will merge all the small carbondata files upto a bigger size configured by <em><strong>spark.sql.files.maxPartitionBytes</strong></em> (128 MB is the default value,it is configurable) during querying. The small carbondata files are combined to a map task to reduce the number of read task. This enhances the performance.</td>
+<td>CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks need to be launched and how much work each task needs to do in a Spark cluster for any query on CarbonData. Each of these task distribution suggestions has its own advantages and disadvantages. Based on the customer use case, appropriate task distribution can be configured.<strong>block</strong>: Setting this value will launch one task per block. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>custom</strong>: Setting this value will group the blocks and distribute them uniformly to the available resources in the cluster. This enhances the query performance but is not suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>blocklet</strong>: Setting this value will launch one task per blocklet. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. <strong>merge_small_files</strong>: Setting this value will merge all the small carbondata files upto a bigger size configured by <em><strong>spark.sql.files.maxPartitionBytes</strong></em> (128 MB is the default value, it is configurable) during querying. The small carbondata files are combined to a map task to reduce the number of read tasks. This enhances the performance.</td>
 </tr>
 <tr>
 <td>carbon.custom.block.distribution</td>
 <td>false</td>
-<td>CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks needs to be launched and how much work each task need to do in a Spark cluster for any query on CarbonData. When this configuration is true, CarbonData would distribute the available blocks to be scanned among the available number of cores.For Example:If there are 10 blocks to be scanned and only 3 tasks can be run(only 3 executor cores available in the cluster), CarbonData would combine blocks as 4,3,3 and give it to 3 tasks to run. <strong>NOTE:</strong> When this configuration is false, as per the <em><strong>carbon.task.distribution</strong></em> configuration, each block/blocklet would be given to each task.</td>
+<td>CarbonData has its own scheduling algorithm to suggest to Spark on how many tasks need to be launched and how much work each task needs to do in a Spark cluster for any query on CarbonData. When this configuration is true, CarbonData would distribute the available blocks to be scanned among the available number of cores. For example: if there are 10 blocks to be scanned and only 3 tasks can be run (only 3 executor cores available in the cluster), CarbonData would combine blocks as 4,3,3 and give it to 3 tasks to run. <strong>NOTE:</strong> When this configuration is false, as per the <em><strong>carbon.task.distribution</strong></em> configuration, each block/blocklet would be given to each task.</td>
 </tr>
 <tr>
 <td>enable.query.statistics</td>
 <td>false</td>
-<td>CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. This configuration when made <em><strong>true</strong></em> would log additional query statistics information to more accurately locate the issues being debugged.<strong>NOTE:</strong> Enabling this would log more debug information to log files, there by increasing the log files size significantly in short span of time.It is advised to configure the log files size, retention of log files parameters in log4j properties appropriately. Also extensive logging is an increased IO operation and hence over all query performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging.</td>
+<td>CarbonData has extensive logging which would be useful for debugging issues related to performance or hard to locate issues. This configuration when made <em><strong>true</strong></em> would log additional query statistics information to more accurately locate the issues being debugged.<strong>NOTE:</strong> Enabling this would log more debug information to log files, thereby increasing the log files size significantly in a short span of time. It is advised to configure the log file size and retention of log files parameters in log4j properties appropriately. Also extensive logging is an increased IO operation and hence the overall query performance might get reduced. Therefore it is recommended to enable this configuration only for the duration of debugging.</td>
 </tr>
 <tr>
 <td>enable.unsafe.in.query.processing</td>
@@ -740,14 +730,24 @@
 <td>CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. This configuration enables to use unsafe functions in CarbonData while scanning the  data during query.</td>
 </tr>
 <tr>
-<td>carbon.query.validate.directqueryondatamap</td>
+<td>carbon.query.validate.direct.query.on.datamap</td>
 <td>true</td>
 <td>CarbonData supports creating pre-aggregate table datamaps as an independent tables. For some debugging purposes, it might be required to directly query from such datamap tables. This configuration allows to query on such datamaps.</td>
 </tr>
 <tr>
+<td>carbon.max.driver.threads.for.block.pruning</td>
+<td>4</td>
+<td>Number of threads used for block pruning in the driver when the number of carbon files is more than 100k. This configuration can be used to set the number of threads between 1 and 4.</td>
+</tr>
+<tr>
 <td>carbon.heap.memory.pooling.threshold.bytes</td>
 <td>1048576</td>
-<td>CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. Using unsafe, memory can be allocated on Java Heap or off heap. This configuration controls the allocation mechanism on Java HEAP.If the heap memory allocations of the given size is greater or equal than this value,it should go through the pooling mechanism.But if set this size to -1, it should not go through the pooling mechanism.Default value is 1048576(1MB, the same as Spark).Value to be specified in bytes.</td>
+<td>CarbonData supports unsafe operations of Java to avoid GC overhead for certain operations. Using unsafe, memory can be allocated on the Java heap or off heap. This configuration controls the allocation mechanism on the Java heap. If the heap memory allocation of the given size is greater than or equal to this value, it goes through the pooling mechanism. If this size is set to -1, it does not go through the pooling mechanism. Default value is 1048576 (1MB, the same as Spark). Value is to be specified in bytes.</td>
+</tr>
+<tr>
+<td>carbon.push.rowfilters.for.vector</td>
+<td>false</td>
+<td>When enabled, complete row filters will be handled by carbon in case of vector queries. If it is disabled, then only page level pruning will be done by carbon and row level filtering will be done by spark for vector queries. There are also scan optimizations in carbon to avoid multiple data copies when this parameter is set to false. There is no change in flow for non-vector based queries.</td>
 </tr>
 </tbody>
 </table>
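<p>Similarly, a minimal carbon.properties sketch for the query-side settings above; the values shown are examples only, not recommendations:</p>
<pre><code># Cap the driver LRU cache at 4 GB instead of the unlimited default (-1)
carbon.max.driver.lru.cache.size=4096
# Launch one task per block (default task distribution)
carbon.task.distribution=block
# Keep extra query statistics logging off except while debugging
enable.query.statistics=false
</code></pre>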
@@ -844,7 +844,7 @@
 <tbody>
 <tr>
 <td>carbon.options.bad.records.logger.enable</td>
-<td>CarbonData can identify the records that are not conformant to schema and isolate them as bad records.Enabling this configuration will make CarbonData to log such bad records.<strong>NOTE:</strong> If the input data contains many bad records, logging them will slow down the over all data loading throughput. The data load operation status would depend on the configuration in <em><strong>carbon.bad.records.action</strong></em>.</td>
+<td>CarbonData can identify the records that are not conformant to schema and isolate them as bad records. Enabling this configuration will make CarbonData log such bad records.<strong>NOTE:</strong> If the input data contains many bad records, logging them will slow down the overall data loading throughput. The data load operation status would depend on the configuration in <em><strong>carbon.bad.records.action</strong></em>.</td>
 </tr>
 <tr>
 <td>carbon.options.bad.records.logger.enable</td>
@@ -864,7 +864,7 @@
 </tr>
 <tr>
 <td>carbon.options.single.pass</td>
-<td>Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary. This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE. <strong>NOTE:</strong> Enabling this starts a new dictionary server to handle dictionary generation requests during data loading. Without this option, the input csv files will have to read twice.Once while dictionary generation and persisting to the dictionary files.second when the data loading need to convert the input data into carbondata format.Enabling this optimizes the optimizes to read the input data only once there by reducing IO and hence over all data loading time.If concurrent data loading needs to be supported, consider tuning <em><strong>dictionary.worker.threads</strong></em>.Port on which the dictionary serv
 er need to listen on can be configured using the configuration <em><strong>carbon.dictionary.server.port</strong></em>.</td>
+<td>Single Pass Loading enables a single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary. This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE. <strong>NOTE:</strong> Enabling this starts a new dictionary server to handle dictionary generation requests during data loading. Without this option, the input csv files will have to be read twice: once while generating the dictionary and persisting it to the dictionary files, and a second time when the data loading needs to convert the input data into carbondata format. Enabling this option optimizes the load to read the input data only once, thereby reducing IO and hence overall data loading time. If concurrent data loading needs to be supported, consider tuning <em><strong>dictionary.worker.threads</strong></em>. Port on which the dictionary server needs to listen can be configured using the configuration <em><strong>carbon.dictionary.server.port</strong></em>.</td>
 </tr>
 <tr>
 <td>carbon.options.bad.record.path</td>
@@ -879,11 +879,11 @@
 <td>Specifies whether to use unsafe sort during data loading. Unsafe sort reduces the garbage collection during data load operation, resulting in better performance.</td>
 </tr>
 <tr>
-<td>carbon.options.dateformat</td>
+<td>carbon.options.date.format</td>
 <td>Specifies the data format of the date columns in the data being loaded</td>
 </tr>
 <tr>
-<td>carbon.options.timestampformat</td>
+<td>carbon.options.timestamp.format</td>
 <td>Specifies the timestamp format of the time stamp columns in the data being loaded</td>
 </tr>
 <tr>
@@ -900,7 +900,7 @@
 </tr>
 <tr>
 <td>carbon.query.directQueryOnDataMap.enabled</td>
-<td>Specifies whether datamap can be queried directly. This is useful for debugging purposes.**NOTE: **Refer to <a href="#query-configuration">Query Configuration</a>#carbon.query.validate.directqueryondatamap for detailed information.</td>
+<td>Specifies whether a datamap can be queried directly. This is useful for debugging purposes. <strong>NOTE:</strong> Refer to <a href="#query-configuration">Query Configuration</a>#carbon.query.validate.direct.query.on.datamap for detailed information.</td>
 </tr>
 </tbody>
 </table>
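<p>The carbon.options.* properties listed above are meant to be supplied per session or per load. A minimal sketch, assuming a Spark SQL session where CarbonData's SET/RESET commands are available (the values are examples only):</p>
<pre><code>-- Enable bad record logging and single pass loading for subsequent loads in this session
SET carbon.options.bad.records.logger.enable=true;
SET carbon.options.single.pass=true;

-- Revert the session back to the values configured in carbon.properties
RESET;
</code></pre>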

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/datamap-developer-guide.html
----------------------------------------------------------------------
diff --git a/content/datamap-developer-guide.html b/content/datamap-developer-guide.html
index f442fe2..286c21d 100644
--- a/content/datamap-developer-guide.html
+++ b/content/datamap-developer-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/datamap-management.html
----------------------------------------------------------------------
diff --git a/content/datamap-management.html b/content/datamap-management.html
index f5f9678..5dc2b33 100644
--- a/content/datamap-management.html
+++ b/content/datamap-management.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>


[6/8] carbondata-site git commit: Added 1.5.1 version information

Posted by ra...@apache.org.
http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/ddl-of-carbondata.html
----------------------------------------------------------------------
diff --git a/content/ddl-of-carbondata.html b/content/ddl-of-carbondata.html
index 434f378..7f84786 100644
--- a/content/ddl-of-carbondata.html
+++ b/content/ddl-of-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -223,7 +223,7 @@
 <p>CarbonData DDL statements are documented here,which includes:</p>
 <ul>
 <li>
-<a href="#create-table">CREATE TABLE</a>
+<p><a href="#create-table">CREATE TABLE</a></p>
 <ul>
 <li><a href="#dictionary-encoding-configuration">Dictionary Encoding</a></li>
 <li><a href="#inverted-index-configuration">Inverted Index</a></li>
@@ -239,19 +239,24 @@
 <li><a href="#string-longer-than-32000-characters">Extra Long String columns</a></li>
 <li><a href="#compression-for-table">Compression for Table</a></li>
 <li><a href="#bad-records-path">Bad Records Path</a></li>
+<li><a href="#load-minimum-data-size">Load Minimum Input File Size</a></li>
 </ul>
 </li>
-<li><a href="#create-table-as-select">CREATE TABLE AS SELECT</a></li>
 <li>
-<a href="#create-external-table">CREATE EXTERNAL TABLE</a>
+<p><a href="#create-table-as-select">CREATE TABLE AS SELECT</a></p>
+</li>
+<li>
+<p><a href="#create-external-table">CREATE EXTERNAL TABLE</a></p>
 <ul>
 <li><a href="#create-external-table-on-managed-table-data-location">External Table on Transactional table location</a></li>
 <li><a href="#create-external-table-on-non-transactional-table-data-location">External Table on non-transactional table location</a></li>
 </ul>
 </li>
-<li><a href="#create-database">CREATE DATABASE</a></li>
 <li>
-<a href="#table-management">TABLE MANAGEMENT</a>
+<p><a href="#create-database">CREATE DATABASE</a></p>
+</li>
+<li>
+<p><a href="#table-management">TABLE MANAGEMENT</a></p>
 <ul>
 <li><a href="#show-table">SHOW TABLE</a></li>
 <li>
@@ -271,7 +276,7 @@
 </ul>
 </li>
 <li>
-<a href="#partition">PARTITION</a>
+<p><a href="#partition">PARTITION</a></p>
 <ul>
 <li>
 <a href="#standard-partition">STANDARD PARTITION(HIVE)</a>
@@ -293,7 +298,9 @@
 <li><a href="#drop-a-partition">DROP PARTITION</a></li>
 </ul>
 </li>
-<li><a href="#bucketing">BUCKETING</a></li>
+<li>
+<p><a href="#bucketing">BUCKETING</a></p>
+</li>
 </ul>
 <h2>
 <a id="create-table" class="anchor" href="#create-table" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CREATE TABLE</h2>
@@ -324,6 +331,10 @@ STORED AS carbondata
 <td>Columns to exclude from inverted index generation</td>
 </tr>
 <tr>
+<td><a href="#inverted-index-configuration">INVERTED_INDEX</a></td>
+<td>Columns to include for inverted index generation</td>
+</tr>
+<tr>
 <td><a href="#sort-columns-configuration">SORT_COLUMNS</a></td>
 <td>Columns to include in sort and its order of sort</td>
 </tr>
@@ -403,6 +414,10 @@ STORED AS carbondata
 <td><a href="#bucketing">BUCKETCOLUMNS</a></td>
 <td>Columns which are to be placed in buckets</td>
 </tr>
+<tr>
+<td><a href="#load-minimum-data-size">LOAD_MIN_SIZE_INMB</a></td>
+<td>Minimum input data size per node for data loading</td>
+</tr>
 </tbody>
 </table>
 <p>Following are the guidelines for TBLPROPERTIES, CarbonData's additional table options can be set via carbon.properties.</p>
@@ -419,9 +434,9 @@ Suggested use cases : do dictionary encoding for low cardinality columns, it mig
 <li>
 <h5>
 <a id="inverted-index-configuration" class="anchor" href="#inverted-index-configuration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Inverted Index Configuration</h5>
-<p>By default inverted index is enabled, it might help to improve compression ratio and query speed, especially for low cardinality columns which are in reward position.
+<p>By default the inverted index is disabled, as this reduces the store size; it can be enabled by using a table property. It might help to improve compression ratio and query speed, especially for low cardinality columns which are in reward position.
 Suggested use cases : For high cardinality columns, you can disable the inverted index for improving the data loading performance.</p>
-<pre><code>TBLPROPERTIES ('NO_INVERTED_INDEX'='column1, column3')
+<pre><code>TBLPROPERTIES ('NO_INVERTED_INDEX'='column1', 'INVERTED_INDEX'='column2, column3')
 </code></pre>
 </li>
 <li>
@@ -549,6 +564,8 @@ Following are 5 configurations:</p>
 <li>TIMESTAMP</li>
 <li>DATE</li>
 <li>BOOLEAN</li>
+<li>FLOAT</li>
+<li>BYTE</li>
 </ul>
 </li>
 <li>
@@ -746,7 +763,7 @@ You can refer to SDKwriterTestCase for example.</p>
 <h5>
 <a id="compression-for-table" class="anchor" href="#compression-for-table" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Compression for table</h5>
 <p>Data compression is also supported by CarbonData.
-By default, Snappy is used to compress the data. CarbonData also support ZSTD compressor.
+By default, Snappy is used to compress the data. CarbonData also supports ZSTD compressor.
 User can specify the compressor in the table property:</p>
 <pre><code>TBLPROPERTIES('carbon.column.compressor'='snappy')
 </code></pre>
@@ -770,7 +787,19 @@ The corresponding system property is configured in carbon.properties file as bel
 As the table path remains the same after rename therefore the user can use this property to
 specify bad records path for the table at the time of creation, so that the same path can
 be later viewed in table description for reference.</p>
-<pre><code>  TBLPROPERTIES('BAD_RECORD_PATH'='/opt/badrecords'')
+<pre><code>  TBLPROPERTIES('BAD_RECORD_PATH'='/opt/badrecords')
+</code></pre>
+</li>
+<li>
+<h5>
+<a id="load-minimum-data-size" class="anchor" href="#load-minimum-data-size" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Load minimum data size</h5>
+<p>This property indicates the minimum input data size per node for data loading.
+By default it is not enabled. Setting a non-zero integer value will enable this feature.
+This property is useful if you have a large cluster and only want a small portion of the nodes to process data loading.
+For example, suppose you have a cluster with 10 nodes and the input data is about 1GB. Without this property, each node will process about 100MB of input data and produce at least 10 data files. With this property configured to 512, only 2 nodes will be chosen to process the input data, each with about 512MB of input, producing about 2 or 4 files depending on the compression ratio.
+Moreover, this property can also be specified in the load options; a short sketch of both forms follows this list.
+Notice that once you enable this feature, for load balance, carbondata will ignore the data locality while assigning input data to nodes, which will cause more network traffic.</p>
+<pre><code>  TBLPROPERTIES('LOAD_MIN_SIZE_INMB'='256')
 </code></pre>
 </li>
 </ul>
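<p>A minimal sketch of both ways to specify the minimum load size, using a hypothetical table name and input path:</p>
<pre><code>CREATE TABLE sales_carbon (id INT, name STRING)
STORED AS carbondata
TBLPROPERTIES('LOAD_MIN_SIZE_INMB'='256')

LOAD DATA INPATH 'hdfs://hacluster/data/sales'
INTO TABLE sales_carbon
OPTIONS('LOAD_MIN_SIZE_INMB'='256')
</code></pre>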
@@ -832,14 +861,14 @@ checkAnswer(sql("SELECT count(*) from source"), sql("SELECT count(*) from origin
 <h3>
 <a id="create-external-table-on-non-transactional-table-data-location" class="anchor" href="#create-external-table-on-non-transactional-table-data-location" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Create external table on Non-Transactional table data location.</h3>
 <p>Non-Transactional table data location will have only carbondata and carbonindex files, there will not be a metadata folder (table status and schema).
-Our SDK module currently support writing data in this format.</p>
+Our SDK module currently supports writing data in this format.</p>
 <p><strong>Example:</strong></p>
 <pre><code>sql(
 s"""CREATE EXTERNAL TABLE sdkOutputTable STORED AS carbondata LOCATION
 |'$writerPath' """.stripMargin)
 </code></pre>
 <p>Here writer path will have carbondata and index files.
-This can be SDK output. Refer <a href="./sdk-guide.html">SDK Guide</a>.</p>
+This can be SDK output or C++ SDK output. Refer <a href="./sdk-guide.html">SDK Guide</a> and <a href="./csdk-guide.html">C++ SDK Guide</a>.</p>
 <p><strong>Note:</strong></p>
 <ol>
 <li>Dropping of the external table should not delete the files present in the location.</li>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/dml-of-carbondata.html
----------------------------------------------------------------------
diff --git a/content/dml-of-carbondata.html b/content/dml-of-carbondata.html
index a96578a..15ff807 100644
--- a/content/dml-of-carbondata.html
+++ b/content/dml-of-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -306,7 +306,7 @@ OPTIONS(property_name=property_value, ...)
 </tr>
 <tr>
 <td><a href="#sort-column-bounds">SORT_COLUMN_BOUNDS</a></td>
-<td>How to parititon the sort columns to make the evenly distributed</td>
+<td>How to partition the sort columns to make them evenly distributed</td>
 </tr>
 <tr>
 <td><a href="#single_pass">SINGLE_PASS</a></td>
@@ -353,7 +353,7 @@ OPTIONS(property_name=property_value, ...)
 <li>
 <h5>
 <a id="commentchar" class="anchor" href="#commentchar" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>COMMENTCHAR:</h5>
-<p>Comment Characters can be provided in the load command if user want to comment lines.</p>
+<p>Comment Characters can be provided in the load command if user wants to comment lines.</p>
 <pre><code>OPTIONS('COMMENTCHAR'='#')
 </code></pre>
 </li>
@@ -443,7 +443,7 @@ true: CSV file is with file header.</p>
 <p><strong>NOTE:</strong></p>
 <ul>
 <li>SORT_COLUMN_BOUNDS will be used only when the SORT_SCOPE is 'local_sort'.</li>
-<li>Carbondata will use these bounds as ranges to process data concurrently during the final sort percedure. The records will be sorted and written out inside each partition. Since the partition is sorted, all records will be sorted.</li>
+<li>Carbondata will use these bounds as ranges to process data concurrently during the final sort procedure. The records will be sorted and written out inside each partition. Since the partition is sorted, all records will be sorted.</li>
 <li>Since the actual order and literal order of the dictionary column are not necessarily the same, we do not recommend you to use this feature if the first sort column is 'dictionary_include'.</li>
 <li>The option works better if your CPU usage during loading is low. If your current system CPU usage is high, better not to use this option. Besides, it depends on the user to specify the bounds. If the user does not know the exact bounds to make the data distributed evenly among the bounds, loading performance will still be better than before or at least the same as before.</li>
 <li>Users can find more information about this option in the description of PR1953.</li>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/documentation.html
----------------------------------------------------------------------
diff --git a/content/documentation.html b/content/documentation.html
index aeab9cc..e49cdae 100644
--- a/content/documentation.html
+++ b/content/documentation.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -226,7 +226,7 @@
 <p><strong>File Format Concepts:</strong> Start with the basics of understanding the <a href="./file-structure-of-carbondata.html#carbondata-file-format">CarbonData file format</a> and its <a href="./file-structure-of-carbondata.html">storage structure</a>. This will help to understand other parts of the documentation, including deployment, programming and usage guides.</p>
 <p><strong>Quick Start:</strong> <a href="./quick-start-guide.html#installing-and-configuring-carbondata-to-run-locally-with-spark-shell">Run an example program</a> on your local machine or <a href="https://github.com/apache/carbondata/tree/master/examples/spark2/src/main/scala/org/apache/carbondata/examples" target=_blank>study some examples</a>.</p>
 <p><strong>CarbonData SQL Language Reference:</strong> CarbonData extends the Spark SQL language and adds several <a href="./ddl-of-carbondata.html">DDL</a> and <a href="./dml-of-carbondata.html">DML</a> statements to support operations on it.Refer to the <a href="./language-manual.html">Reference Manual</a> to understand the supported features and functions.</p>
-<p><strong>Programming Guides:</strong> You can read our guides about <a href="./sdk-guide.html">APIs supported</a> to learn how to integrate CarbonData with your applications.</p>
+<p><strong>Programming Guides:</strong> You can read our guides about <a href="./sdk-guide.html">Java APIs supported</a> or <a href="./csdk-guide.html">C++ APIs supported</a> to learn how to integrate CarbonData with your applications.</p>
 <h2>
 <a id="integration" class="anchor" href="#integration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Integration</h2>
 <p>CarbonData can be integrated with popular Execution engines like <a href="./quick-start-guide.html#spark">Spark</a> and <a href="./quick-start-guide.html#presto">Presto</a>.Refer to the <a href="./quick-start-guide.html#integration">Installation and Configuration</a> section to understand all modes of Integrating CarbonData.</p>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/faq.html
----------------------------------------------------------------------
diff --git a/content/faq.html b/content/faq.html
index 2068c17..a42bbb8 100644
--- a/content/faq.html
+++ b/content/faq.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -390,17 +390,15 @@ TimeZone.setDefault(TimeZone.getTimeZone("Asia/Shanghai"))
 <h2>
 <a id="how-to-check-lru-cache-memory-footprint" class="anchor" href="#how-to-check-lru-cache-memory-footprint" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>How to check LRU cache memory footprint?</h2>
 <p>To observe the LRU cache memory footprint in the logs, configure the below properties in log4j.properties file.</p>
-<pre><code>log4j.logger.org.apache.carbondata.core.memory.UnsafeMemoryManager = DEBUG
-log4j.logger.org.apache.carbondata.core.cache.CarbonLRUCache = DEBUG
+<pre><code>log4j.logger.org.apache.carbondata.core.cache.CarbonLRUCache = DEBUG
 </code></pre>
-<p>These properties will enable the DEBUG log for the CarbonLRUCache and UnsafeMemoryManager which will print the information of memory consumed using which the LRU cache size can be decided. <strong>Note:</strong> Enabling the DEBUG log will degrade the query performance.</p>
+<p>This property will enable the DEBUG log for CarbonLRUCache, which will print the memory consumed, based on which the LRU cache size can be decided. <strong>Note:</strong> Enabling the DEBUG log will degrade the query performance. Ensure carbon.max.driver.lru.cache.size is configured to observe the current cache size.</p>
 <p><strong>Example:</strong></p>
-<pre><code>18/09/26 15:05:28 DEBUG UnsafeMemoryManager: pool-44-thread-1 Memory block (org.apache.carbondata.core.memory.MemoryBlock@21312095) is created with size 10. Total memory used 413Bytes, left 536870499Bytes
-18/09/26 15:05:29 DEBUG CarbonLRUCache: main Required size for entry /home/target/store/default/stored_as_carbondata_table/Fact/Part0/Segment_0/0_1537954529044.carbonindexmerge :: 181 Current cache size :: 0
-18/09/26 15:05:30 DEBUG UnsafeMemoryManager: main Freeing memory of size: 105available memory:  536870836
-18/09/26 15:05:30 DEBUG UnsafeMemoryManager: main Freeing memory of size: 76available memory:  536870912
+<pre><code>18/09/26 15:05:29 DEBUG CarbonLRUCache: main Required size for entry /home/target/store/default/stored_as_carbondata_table/Fact/Part0/Segment_0/0_1537954529044.carbonindexmerge :: 181 Current cache size :: 0
 18/09/26 15:05:30 INFO CarbonLRUCache: main Removed entry from InMemory lru cache :: /home/target/store/default/stored_as_carbondata_table/Fact/Part0/Segment_0/0_1537954529044.carbonindexmerge
 </code></pre>
+<p><strong>Note:</strong> If <code>Removed entry from InMemory LRU cache</code> messages are frequently observed in the logs, you may have to increase the configured LRU cache size.</p>
+<p>To observe the LRU cache in a heap dump, check the heap used by the CarbonLRUCache class.</p>
 <h2>
 <a id="getting-tablestatuslock-issues-when-loading-data" class="anchor" href="#getting-tablestatuslock-issues-when-loading-data" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Getting tablestatus.lock issues When loading data</h2>
 <p><strong>Symptom</strong></p>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/file-structure-of-carbondata.html
----------------------------------------------------------------------
diff --git a/content/file-structure-of-carbondata.html b/content/file-structure-of-carbondata.html
index baa34db..5230ba3 100644
--- a/content/file-structure-of-carbondata.html
+++ b/content/file-structure-of-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -313,8 +313,7 @@ Several encodings that may be used in CarbonData files.</li>
 <p><a href="../docs/images/2-3_3.png?raw=true" target="_blank" rel="noopener noreferrer"><img src="https://github.com/apache/carbondata/blob/master/docs/images/2-3_3.png?raw=true" alt="V3" style="max-width:100%;"></a></p>
 <h4>
 <a id="footer-format" class="anchor" href="#footer-format" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Footer format</h4>
-<p>Footer records each carbondata
-All blocklet data distribution information and statistical related metadata information (minmax, startkey/endkey) inside the file.</p>
+<p>The footer of each carbondata file records all blocklet data distribution information and statistics-related metadata (minmax, startkey/endkey).</p>
 <p><a href="../docs/images/2-3_4.png?raw=true" target="_blank" rel="noopener noreferrer"><img src="https://github.com/apache/carbondata/blob/master/docs/images/2-3_4.png?raw=true" alt="Footer format" style="max-width:100%;"></a></p>
 <ol>
 <li>BlockletInfo3 is used to record the offset and length of all ColumnChunk3.</li>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/how-to-contribute-to-apache-carbondata.html
----------------------------------------------------------------------
diff --git a/content/how-to-contribute-to-apache-carbondata.html b/content/how-to-contribute-to-apache-carbondata.html
index eae12e3..a6dc1ee 100644
--- a/content/how-to-contribute-to-apache-carbondata.html
+++ b/content/how-to-contribute-to-apache-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/index.html
----------------------------------------------------------------------
diff --git a/content/index.html b/content/index.html
index ea86500..54bf380 100644
--- a/content/index.html
+++ b/content/index.html
@@ -54,6 +54,9 @@
                                 class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -66,9 +69,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -313,6 +313,13 @@
                                 </h4>
                                 <div class="linkblock">
                                     <div class="block-row">
+                                        <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                           target="_blank">Apache CarbonData 1.5.1</a>
+                                        <span class="release-date">Dec 2018</span>
+                                        <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Apache+CarbonData+1.5.1+Release"
+                                           class="whatsnew" target="_blank">what's new</a>
+                                    </div>
+                                    <div class="block-row">
                                         <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                            target="_blank">Apache CarbonData 1.5.0</a>
                                         <span class="release-date">Oct 2018</span>
@@ -478,7 +485,7 @@
                             to do is:</p>
                         <ol class="orderlist">
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
                                    target="_blank">Download</a>the latest release.
 
                             </li>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/introduction.html
----------------------------------------------------------------------
diff --git a/content/introduction.html b/content/introduction.html
index 51a46c2..0cfa369 100644
--- a/content/introduction.html
+++ b/content/introduction.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/language-manual.html
----------------------------------------------------------------------
diff --git a/content/language-manual.html b/content/language-manual.html
index 9f738b8..a95de91 100644
--- a/content/language-manual.html
+++ b/content/language-manual.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/lucene-datamap-guide.html
----------------------------------------------------------------------
diff --git a/content/lucene-datamap-guide.html b/content/lucene-datamap-guide.html
index f461ca5..ef819a5 100644
--- a/content/lucene-datamap-guide.html
+++ b/content/lucene-datamap-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/performance-tuning.html
----------------------------------------------------------------------
diff --git a/content/performance-tuning.html b/content/performance-tuning.html
index e077e85..e539614 100644
--- a/content/performance-tuning.html
+++ b/content/performance-tuning.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -399,8 +399,8 @@ You can configure CarbonData by tuning following properties in carbon.properties
 </tr>
 <tr>
 <td>carbon.sort.file.write.buffer.size</td>
-<td>Default:  50000.</td>
-<td>DataOutputStream buffer.</td>
+<td>Default:  16384.</td>
+<td>CarbonData sorts and writes data to intermediate files to limit the memory usage. This configuration determines the buffer size to be used for reading and writing such files.</td>
 </tr>
 <tr>
 <td>carbon.merge.sort.reader.thread</td>
@@ -474,7 +474,7 @@ scenarios. After the completion of POC, some of the configurations impacting the
 <tr>
 <td>carbon.detail.batch.size</td>
 <td>spark/carbonlib/carbon.properties</td>
-<td>Data loading</td>
+<td>Querying</td>
 <td>The buffer size to store records, returned from the block scan.</td>
 <td>In limit scenario this parameter is very important. For example your query limit is 1000. But if we set this value to 3000 that means we get 3000 records from scan but spark will only take 1000 rows. So the 2000 remaining are useless. In one Finance test case after we set it to 100, in the limit 1000 scenario the performance increase about 2 times in comparison to if we set this value to 12000.</td>
 </tr>
@@ -486,18 +486,11 @@ scenarios. After the completion of POC, some of the configurations impacting the
 <td>If this is set it to true CarbonData will use YARN local directories for multi-table load disk load balance, that will improve the data load performance.</td>
 </tr>
 <tr>
-<td>carbon.use.multiple.temp.dir</td>
-<td>spark/carbonlib/carbon.properties</td>
-<td>Data loading</td>
-<td>Whether to use multiple YARN local directories during table data loading for disk load balance</td>
-<td>After enabling 'carbon.use.local.dir', if this is set to true, CarbonData will use all YARN local directories during data load for disk load balance, that will improve the data load performance. Please enable this property when you encounter disk hotspot problem during data loading.</td>
-</tr>
-<tr>
 <td>carbon.sort.temp.compressor</td>
 <td>spark/carbonlib/carbon.properties</td>
 <td>Data loading</td>
 <td>Specify the name of compressor to compress the intermediate sort temporary files during sort procedure in data loading.</td>
-<td>The optional values are 'SNAPPY','GZIP','BZIP2','LZ4','ZSTD', and empty. By default, empty means that Carbondata will not compress the sort temp files. This parameter will be useful if you encounter disk bottleneck.</td>
+<td>The optional values are 'SNAPPY','GZIP','BZIP2','LZ4','ZSTD', and empty. In particular, empty means that Carbondata will not compress the sort temp files. This parameter will be useful if you encounter a disk bottleneck.</td>
 </tr>
 <tr>
 <td>carbon.load.skewedDataOptimization.enabled</td>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/preaggregate-datamap-guide.html
----------------------------------------------------------------------
diff --git a/content/preaggregate-datamap-guide.html b/content/preaggregate-datamap-guide.html
index e4f1c91..5e0d4e3 100644
--- a/content/preaggregate-datamap-guide.html
+++ b/content/preaggregate-datamap-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/quick-start-guide.html
----------------------------------------------------------------------
diff --git a/content/quick-start-guide.html b/content/quick-start-guide.html
index 4703ed3..a2f093d 100644
--- a/content/quick-start-guide.html
+++ b/content/quick-start-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -564,7 +564,7 @@ hdfs://&lt;host_name&gt;:port/user/hive/warehouse/carbon.store
 <h2>
 <a id="installing-and-configuring-carbondata-on-presto" class="anchor" href="#installing-and-configuring-carbondata-on-presto" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Installing and Configuring CarbonData on Presto</h2>
 <p><strong>NOTE:</strong> <strong>CarbonData tables cannot be created nor loaded from Presto. User need to create CarbonData Table and load data into it
-either with <a href="#installing-and-configuring-carbondata-to-run-locally-with-spark-shell">Spark</a> or <a href="./sdk-guide.html">SDK</a>.
+either with <a href="#installing-and-configuring-carbondata-to-run-locally-with-spark-shell">Spark</a>, the <a href="./sdk-guide.html">SDK</a>, or the <a href="./csdk-guide.html">C++ SDK</a>.
 Once the table is created,it can be queried from Presto.</strong></p>
 <h3>
 <a id="installing-presto" class="anchor" href="#installing-presto" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Installing Presto</h3>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/release-guide.html
----------------------------------------------------------------------
diff --git a/content/release-guide.html b/content/release-guide.html
index c40f316..dcdaba3 100644
--- a/content/release-guide.html
+++ b/content/release-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/s3-guide.html
----------------------------------------------------------------------
diff --git a/content/s3-guide.html b/content/s3-guide.html
index 27c1c66..ba25dfb 100644
--- a/content/s3-guide.html
+++ b/content/s3-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/sdk-guide.html
----------------------------------------------------------------------
diff --git a/content/sdk-guide.html b/content/sdk-guide.html
index f33d5f9..37d6b26 100644
--- a/content/sdk-guide.html
+++ b/content/sdk-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -227,10 +227,14 @@
 </ol>
 <h1>
 <a id="sdk-writer" class="anchor" href="#sdk-writer" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>SDK Writer</h1>
-<p>In the carbon jars package, there exist a carbondata-store-sdk-x.x.x-SNAPSHOT.jar, including SDK writer and reader.</p>
+<p>In the carbon jars package, there exists a carbondata-store-sdk-x.x.x-SNAPSHOT.jar, which includes the SDK writer and reader.
+To use the SDK, besides carbondata-store-sdk-x.x.x-SNAPSHOT.jar, the following jars are also required:
+carbondata-core-x.x.x-SNAPSHOT.jar, carbondata-common-x.x.x-SNAPSHOT.jar,
+carbondata-format-x.x.x-SNAPSHOT.jar, carbondata-hadoop-x.x.x-SNAPSHOT.jar and carbondata-processing-x.x.x-SNAPSHOT.jar.
+Alternatively, carbondata-sdk.jar can be used directly.</p>
 <p>This SDK writer, writes carbondata file and carbonindex file at a given path.
 External client can make use of this writer to convert other format data or live data to create carbondata and index files.
-These SDK writer output contains just a carbondata and carbonindex files. No metadata folder will be present.</p>
+The output of this SDK writer contains just carbondata and carbonindex files. No metadata folder will be present.</p>
 <h2>
 <a id="quick-example" class="anchor" href="#quick-example" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Quick example</h2>
 <h3>
@@ -267,7 +271,7 @@ These SDK writer output contains just a carbondata and carbonindex files. No met
 
      <span class="pl-smi">CarbonProperties</span><span class="pl-k">.</span>getInstance()<span class="pl-k">.</span>addProperty(<span class="pl-s"><span class="pl-pds">"</span>enable.offheap.sort<span class="pl-pds">"</span></span>, enableOffheap);
  
-     <span class="pl-smi">CarbonWriterBuilder</span> builder <span class="pl-k">=</span> <span class="pl-smi">CarbonWriter</span><span class="pl-k">.</span>builder()<span class="pl-k">.</span>outputPath(path)<span class="pl-k">.</span>withCsvInput(schema);
+     <span class="pl-smi">CarbonWriterBuilder</span> builder <span class="pl-k">=</span> <span class="pl-smi">CarbonWriter</span><span class="pl-k">.</span>builder()<span class="pl-k">.</span>outputPath(path)<span class="pl-k">.</span>withCsvInput(schema)<span class="pl-k">.</span>writtenBy(<span class="pl-s"><span class="pl-pds">"</span>SDK<span class="pl-pds">"</span></span>);
  
      <span class="pl-smi">CarbonWriter</span> writer <span class="pl-k">=</span> builder<span class="pl-k">.</span>build();
  
@@ -322,7 +326,7 @@ These SDK writer output contains just a carbondata and carbonindex files. No met
     <span class="pl-k">try</span> {
       <span class="pl-smi">CarbonWriter</span> writer <span class="pl-k">=</span> <span class="pl-smi">CarbonWriter</span><span class="pl-k">.</span>builder()
           .outputPath(path)
-          .withAvroInput(<span class="pl-k">new</span> <span class="pl-smi">org.apache.avro<span class="pl-k">.</span>Schema</span>.<span class="pl-smi">Parser</span>()<span class="pl-k">.</span>parse(avroSchema))<span class="pl-k">.</span>build();
+          .withAvroInput(<span class="pl-k">new</span> <span class="pl-smi">org.apache.avro<span class="pl-k">.</span>Schema</span>.<span class="pl-smi">Parser</span>()<span class="pl-k">.</span>parse(avroSchema))<span class="pl-k">.</span>writtenBy(<span class="pl-s"><span class="pl-pds">"</span>SDK<span class="pl-pds">"</span></span>)<span class="pl-k">.</span>build();
 
       <span class="pl-k">for</span> (<span class="pl-k">int</span> i <span class="pl-k">=</span> <span class="pl-c1">0</span>; i <span class="pl-k">&lt;</span> <span class="pl-c1">100</span>; i<span class="pl-k">++</span>) {
         writer<span class="pl-k">.</span>write(record);
@@ -360,7 +364,7 @@ These SDK writer output contains just a carbondata and carbonindex files. No met
 
     <span class="pl-smi">Schema</span> <span class="pl-smi">CarbonSchema</span> <span class="pl-k">=</span> <span class="pl-k">new</span> <span class="pl-smi">Schema</span>(fields);
 
-    <span class="pl-smi">CarbonWriterBuilder</span> builder <span class="pl-k">=</span> <span class="pl-smi">CarbonWriter</span><span class="pl-k">.</span>builder()<span class="pl-k">.</span>outputPath(path)<span class="pl-k">.</span>withJsonInput(<span class="pl-smi">CarbonSchema</span>);
+    <span class="pl-smi">CarbonWriterBuilder</span> builder <span class="pl-k">=</span> <span class="pl-smi">CarbonWriter</span><span class="pl-k">.</span>builder()<span class="pl-k">.</span>outputPath(path)<span class="pl-k">.</span>withJsonInput(<span class="pl-smi">CarbonSchema</span>)<span class="pl-k">.</span>writtenBy(<span class="pl-s"><span class="pl-pds">"</span>SDK<span class="pl-pds">"</span></span>);
 
     <span class="pl-c"><span class="pl-c">//</span> initialize json writer with carbon schema</span>
     <span class="pl-smi">CarbonWriter</span> writer <span class="pl-k">=</span> builder<span class="pl-k">.</span>build();
@@ -644,6 +648,8 @@ public CarbonWriterBuilder withLoadOptions(Map&lt;String, String&gt; options);
 * j. sort_scope -- "local_sort", "no_sort", "batch_sort". default value is "local_sort"
 * k. long_string_columns -- comma separated string columns which are more than 32k length. 
 *                           default value is null.
+* l. inverted_index -- comma separated string columns for which inverted index needs to be
+*                      generated
 *
 * @return updated CarbonWriterBuilder
 */
@@ -667,6 +673,15 @@ public CarbonWriterBuilder withThreadSafe(short numOfThreads);
 */
 public CarbonWriterBuilder withHadoopConf(Configuration conf)
 </code></pre>
+<pre><code>  /**
+   * Updates the hadoop configuration with the given key value
+   *
+   * @param key   key word
+   * @param value value
+   * @return this object
+   */
+  public CarbonWriterBuilder withHadoopConf(String key, String value);
+</code></pre>
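+<p>For illustration, a minimal sketch of how this key/value variant might be used when writing to S3. The bucket path and credential values are placeholders, and the fs.s3a.* property names are standard Hadoop configuration keys rather than CarbonData-specific options; the remaining builder calls are the ones documented in this guide.</p>
+<pre><code>// Sketch only: placeholder path and credentials, schema built as in the quick example above
+CarbonWriter writer = CarbonWriter.builder()
+    .outputPath("s3a://my-bucket/carbon-output")
+    .withHadoopConf("fs.s3a.access.key", "ACCESS_KEY")
+    .withHadoopConf("fs.s3a.secret.key", "SECRET_KEY")
+    .withCsvInput(schema)
+    .writtenBy("SDK")
+    .build();
+</code></pre>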
 <pre><code>/**
 * to build a {@link CarbonWriter}, which accepts row in CSV format
 *
@@ -692,6 +707,23 @@ public CarbonWriterBuilder withAvroInput(org.apache.avro.Schema avroSchema);
 public CarbonWriterBuilder withJsonInput(Schema carbonSchema);
 </code></pre>
 <pre><code>/**
+* To record the name of the application that is writing the carbondata file.
+* This is a mandatory API to call, else the build() call will fail with an error.
+* @param appName name of the application which is writing the carbondata files
+* @return CarbonWriterBuilder
+*/
+public CarbonWriterBuilder writtenBy(String appName) {
+</code></pre>
+<pre><code>/**
+* Sets the list of columns for which inverted index needs to be generated
+* @param invertedIndexColumns is a string array of columns for which inverted index needs to
+* be generated.
+* If it is null or an empty array, inverted index will be generated for none of the columns
+* @return updated CarbonWriterBuilder
+*/
+public CarbonWriterBuilder invertedIndexFor(String[] invertedIndexColumns);
+</code></pre>
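+<p>A minimal sketch of using invertedIndexFor together with the other builder calls shown earlier in this guide. The column name "name" is illustrative and assumed to be a string column present in the schema.</p>
+<pre><code>// Sketch only: "name" is an assumed string column of the schema, which is built as in the quick example
+CarbonWriter writer = CarbonWriter.builder()
+    .outputPath("/tmp/carbon-out")
+    .withCsvInput(schema)
+    .invertedIndexFor(new String[]{"name"})
+    .writtenBy("SDK")
+    .build();
+</code></pre>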
+<pre><code>/**
 * Build a {@link CarbonWriter}
 * This writer is not thread safe,
 * use withThreadSafe() configuration in multi thread environment
@@ -702,9 +734,22 @@ public CarbonWriterBuilder withJsonInput(Schema carbonSchema);
 */
 public CarbonWriter build() throws IOException, InvalidLoadOptionException;
 </code></pre>
+<pre><code> /**
+   * Configure Row Record Reader for reading.
+   *
+   */
+  public CarbonReaderBuilder withRowRecordReader()
+</code></pre>
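+<p>Note that withRowRecordReader() returns a CarbonReaderBuilder, so it is used while building a reader rather than a writer. A hedged sketch of combining it with the reader APIs documented below; the table path and the cast of each row to Object[] are assumptions.</p>
+<pre><code>// Sketch only: placeholder path, row layout assumed to be Object[]
+CarbonReader reader = CarbonReader.builder("/path/to/carbon/files")
+    .withRowRecordReader()
+    .build();
+while (reader.hasNext()) {
+  Object[] row = (Object[]) reader.readNextRow();
+  // process one row
+}
+reader.close();
+</code></pre>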
 <h3>
 <a id="class-orgapachecarbondatasdkfilecarbonwriter" class="anchor" href="#class-orgapachecarbondatasdkfilecarbonwriter" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Class org.apache.carbondata.sdk.file.CarbonWriter</h3>
 <pre><code>/**
+* Create a {@link CarbonWriterBuilder} to build a {@link CarbonWriter}
+*/
+public static CarbonWriterBuilder builder() {
+    return new CarbonWriterBuilder();
+}
+</code></pre>
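+<p>Putting these pieces together, a minimal end-to-end write might look like the following sketch. The two-column schema and the row values are illustrative, not part of the API.</p>
+<pre><code>// Sketch only: illustrative schema and data
+Field[] fields = new Field[2];
+fields[0] = new Field("name", DataTypes.STRING);
+fields[1] = new Field("age", DataTypes.INT);
+CarbonWriter writer = CarbonWriter.builder()
+    .outputPath("/tmp/carbon-out")
+    .withCsvInput(new Schema(fields))
+    .writtenBy("SDK")
+    .build();
+for (int i = 0; i &lt; 10; i++) {
+  writer.write(new String[]{"robot" + i, String.valueOf(i)});
+}
+writer.close();
+</code></pre>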
+<pre><code>/**
 * Write an object to the file, the format of the object depends on the implementation
 * If AvroCarbonWriter, object is of type org.apache.avro.generic.GenericData.Record, 
 *                      which is one row of data.
@@ -720,13 +765,6 @@ public abstract void write(Object object) throws IOException;
 */
 public abstract void close() throws IOException;
 </code></pre>
-<pre><code>/**
-* Create a {@link CarbonWriterBuilder} to build a {@link CarbonWriter}
-*/
-public static CarbonWriterBuilder builder() {
-    return new CarbonWriterBuilder();
-}
-</code></pre>
 <h3>
 <a id="class-orgapachecarbondatasdkfilefield" class="anchor" href="#class-orgapachecarbondatasdkfilefield" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Class org.apache.carbondata.sdk.file.Field</h3>
 <pre><code>/**
@@ -824,6 +862,24 @@ External client can make use of this reader to read CarbonData files without Car
    */
   public static CarbonReaderBuilder builder(String tablePath);
 </code></pre>
+<pre><code>/**
+  * Breaks the list of CarbonRecordReader in CarbonReader into multiple
+  * CarbonReader objects, each iterating through some 'carbondata' files
+  * and return that list of CarbonReader objects
+  *
+  * If the no. of files is greater than maxSplits, then break the
+  * CarbonReader into maxSplits splits, with each split iterating
+  * through &gt;= 1 file.
+  *
+  * If the no. of files is less than maxSplits, then return list of
+  * CarbonReader with size as the no. of files, with each CarbonReader
+  * iterating through exactly one file
+  *
+  * @param maxSplits: Int
+  * @return list of CarbonReader objects
+  */
+  public List&lt;CarbonReader&gt; split(int maxSplits);
+</code></pre>
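+<p>A short sketch of split(); it reads the splits sequentially only to keep the example small, whereas in practice each returned CarbonReader would typically be handed to its own thread. The path and split count are placeholders.</p>
+<pre><code>// Sketch only: placeholder path; split(4) returns a java.util.List of at most 4 readers
+CarbonReader reader = CarbonReader.builder("/path/to/carbon/files").build();
+List&lt;CarbonReader&gt; splits = reader.split(4);
+for (CarbonReader split : splits) {
+  while (split.hasNext()) {
+    Object row = split.readNextRow();
+    // process one row
+  }
+  split.close();
+}
+</code></pre>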
 <pre><code>  /**
    * Return true if has next row
    */
@@ -835,6 +891,11 @@ External client can make use of this reader to read CarbonData files without Car
   public T readNextRow();
 </code></pre>
 <pre><code>  /**
+   * Read and return next batch row objects
+   */
+  public Object[] readNextBatchRow();
+</code></pre>
+<pre><code>  /**
    * Close reader
    */
   public void close();
@@ -865,6 +926,14 @@ External client can make use of this reader to read CarbonData files without Car
   */
   public CarbonReaderBuilder filter(Expression filterExpression);
 </code></pre>
+<pre><code>  /**
+   * Sets the batch size of records to read
+   *
+   * @param batch batch size
+   * @return updated CarbonReaderBuilder
+   */
+  public CarbonReaderBuilder withBatch(int batch);
+</code></pre>
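+<p>A minimal sketch combining withBatch() and readNextBatchRow(); the batch size and path are placeholders, and each call to readNextBatchRow() is assumed to return at most one batch of rows.</p>
+<pre><code>// Sketch only: placeholder path and batch size
+CarbonReader reader = CarbonReader.builder("/path/to/carbon/files")
+    .withBatch(1000)
+    .build();
+while (reader.hasNext()) {
+  Object[] batch = reader.readNextBatchRow();
+  for (Object row : batch) {
+    // process one row of the batch
+  }
+}
+reader.close();
+</code></pre>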
 <pre><code>/**
  * To support hadoop configuration
  *
@@ -873,6 +942,15 @@ External client can make use of this reader to read CarbonData files without Car
  */
  public CarbonReaderBuilder withHadoopConf(Configuration conf);
 </code></pre>
+<pre><code>  /**
+   * Updates the hadoop configuration with the given key value
+   *
+   * @param key   key word
+   * @param value value
+   * @return this object
+   */
+  public CarbonReaderBuilder withHadoopConf(String key, String value);
+</code></pre>
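+<p>Analogous to the writer, a hedged sketch of passing S3A credentials to the reader through this key/value variant. The bucket path and key values are placeholders and the fs.s3a.* names are standard Hadoop configuration keys.</p>
+<pre><code>// Sketch only: placeholder path and credentials
+CarbonReader reader = CarbonReader.builder("s3a://my-bucket/carbon-output")
+    .withHadoopConf("fs.s3a.access.key", "ACCESS_KEY")
+    .withHadoopConf("fs.s3a.secret.key", "SECRET_KEY")
+    .build();
+</code></pre>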
 <pre><code> /**
    * Build CarbonReader
    *
@@ -892,6 +970,7 @@ External client can make use of this reader to read CarbonData files without Car
    * @return schema object
    * @throws IOException
    */
+  @Deprecated
   public static Schema readSchemaInSchemaFile(String schemaFilePath);
 </code></pre>
 <pre><code>  /**
@@ -900,6 +979,7 @@ External client can make use of this reader to read CarbonData files without Car
    * @param dataFilePath complete path including carbondata file name
    * @return Schema object
    */
+  @Deprecated
   public static Schema readSchemaInDataFile(String dataFilePath);
 </code></pre>
 <pre><code>  /**
@@ -909,8 +989,42 @@ External client can make use of this reader to read CarbonData files without Car
    * @return schema object
    * @throws IOException
    */
+  @Deprecated
   public static Schema readSchemaInIndexFile(String indexFilePath);
 </code></pre>
+<pre><code>  /**
+   * read schema from path,
+   * path can be a folder path, carbonindex file path, or carbondata file path,
+   * and it will not check the schema of all files
+   *
+   * @param path file/folder path
+   * @return schema
+   * @throws IOException
+   */
+  public static Schema readSchema(String path);
+</code></pre>
+<pre><code>  /**
+   * read schema from path,
+   * path can be a folder path, carbonindex file path, or carbondata file path,
+   * and the user can decide whether to check the schema of all files
+   *
+   * @param path             file/folder path
+   * @param validateSchema whether to check the schema of all files
+   * @return schema
+   * @throws IOException
+   */
+  public static Schema readSchema(String path, boolean validateSchema);
+</code></pre>
+<pre><code>  /**
+   * This method returns the version details as a formatted string by reading from the carbondata file
+   * If the application name is SDK_1.0.0 and it wrote the carbondata file with CarbonData project version 1.6,
+   * then this API returns the String "SDK_1.0.0 in version: 1.6.0-SNAPSHOT"
+   * @param dataFilePath complete path including carbondata file name
+   * @return string with information of who has written this file in which carbondata project version
+   * @throws IOException
+   */
+  public static String getVersionDetails(String dataFilePath);
+</code></pre>
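+<p>A short sketch of the schema helpers above. It assumes these static methods live on the org.apache.carbondata.sdk.file.CarbonSchemaReader class, as elsewhere in the SDK guide; if your version places them differently, adjust the class name. The paths are placeholders.</p>
+<pre><code>// Sketch only: CarbonSchemaReader as the hosting class is an assumption, paths are placeholders
+Schema schema = CarbonSchemaReader.readSchema("/path/to/carbon/files");
+Schema validated = CarbonSchemaReader.readSchema("/path/to/carbon/files", true);
+String version = CarbonSchemaReader.getVersionDetails("/path/to/carbon/files/part-0.carbondata");
+</code></pre>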
 <h3>
 <a id="class-orgapachecarbondatasdkfileschema-1" class="anchor" href="#class-orgapachecarbondatasdkfileschema-1" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Class org.apache.carbondata.sdk.file.Schema</h3>
 <pre><code>  /**

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/security.html
----------------------------------------------------------------------
diff --git a/content/security.html b/content/security.html
index ce1bc30..75a2f65 100644
--- a/content/security.html
+++ b/content/security.html
@@ -45,6 +45,9 @@
                            aria-expanded="false">Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/segment-management-on-carbondata.html
----------------------------------------------------------------------
diff --git a/content/segment-management-on-carbondata.html b/content/segment-management-on-carbondata.html
index 2f04025..dae0d0e 100644
--- a/content/segment-management-on-carbondata.html
+++ b/content/segment-management-on-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/streaming-guide.html
----------------------------------------------------------------------
diff --git a/content/streaming-guide.html b/content/streaming-guide.html
index 7ea58ce..8d8cb82 100644
--- a/content/streaming-guide.html
+++ b/content/streaming-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -243,9 +243,10 @@
 <li>
 <a href="#streaming-job-management">Streaming Job Management</a>
 <ul>
-<li><a href="#start-stream">START STREAM</a></li>
-<li><a href="#stop-stream">STOP STREAM</a></li>
+<li><a href="#create-stream">CREATE STREAM</a></li>
+<li><a href="#drop-stream">DROP STREAM</a></li>
 <li><a href="#show-streams">SHOW STREAMS</a></li>
+<li><a href="#close-stream">CLOSE STREAM</a></li>
 </ul>
 </li>
 </ul>
@@ -570,7 +571,7 @@ streaming table using following DDL.</p>
 
     sql(
       """
-        |START STREAM job123 ON TABLE sink
+        |CREATE STREAM job123 ON TABLE sink
         |STMPROPERTIES(
         |  'trigger'='ProcessingTime',
         |  'interval'='1 seconds')
@@ -580,7 +581,7 @@ streaming table using following DDL.</p>
         |  WHERE id % 2 = 1
       """.stripMargin)
 
-    sql("STOP STREAM job123")
+    sql("DROP STREAM job123")
 
     sql("SHOW STREAMS [ON TABLE tableName]")
 </code></pre>
@@ -591,14 +592,14 @@ streaming table using following DDL.</p>
 <p>As above example shown:</p>
 <ul>
 <li>
-<code>START STREAM jobName ON TABLE tableName</code> is used to start a streaming ingest job.</li>
+<code>CREATE STREAM jobName ON TABLE tableName</code> is used to start a streaming ingest job.</li>
 <li>
-<code>STOP STREAM jobName</code> is used to stop a streaming job by its name</li>
+<code>DROP STREAM jobName</code> is used to stop a streaming job by its name</li>
 <li>
 <code>SHOW STREAMS [ON TABLE tableName]</code> is used to print streaming job information</li>
 </ul>
 <h5>
-<a id="start-stream" class="anchor" href="#start-stream" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>START STREAM</h5>
+<a id="create-stream" class="anchor" href="#create-stream" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CREATE STREAM</h5>
 <p>When this is issued, carbon will start a structured streaming job to do the streaming ingestion. Before launching the job, system will validate:</p>
 <ul>
 <li>
@@ -651,9 +652,24 @@ TBLPROPERTIES(
  <span class="pl-s"><span class="pl-pds">'</span>record_format<span class="pl-pds">'</span></span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">'</span>csv<span class="pl-pds">'</span></span>, <span class="pl-k">//</span> can be csv <span class="pl-k">or</span> json, default is csv
  <span class="pl-s"><span class="pl-pds">'</span>delimiter<span class="pl-pds">'</span></span><span class="pl-k">=</span><span class="pl-s"><span class="pl-pds">'</span>|<span class="pl-pds">'</span></span>
 )</pre></div>
+<ul>
+<li>Then CREATE STREAM can be used to start the streaming ingest job from source table to sink table</li>
+</ul>
+<pre><code>CREATE STREAM job123 ON TABLE sink
+STMPROPERTIES(
+    'trigger'='ProcessingTime',
+     'interval'='10 seconds'
+) 
+AS
+   SELECT *
+   FROM source
+   WHERE id % 2 = 1
+</code></pre>
 <h5>
-<a id="stop-stream" class="anchor" href="#stop-stream" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>STOP STREAM</h5>
-<p>When this is issued, the streaming job will be stopped immediately. It will fail if the jobName specified is not exist.</p>
+<a id="drop-stream" class="anchor" href="#drop-stream" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>DROP STREAM</h5>
+<p>When <code>DROP STREAM</code> is issued, the streaming job will be stopped immediately. It will fail if the specified jobName does not exist.</p>
+<pre><code>DROP STREAM job123
+</code></pre>
 <h5>
 <a id="show-streams" class="anchor" href="#show-streams" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>SHOW STREAMS</h5>
 <p><code>SHOW STREAMS ON TABLE tableName</code> command will print the streaming job information as following</p>
@@ -680,6 +696,10 @@ TBLPROPERTIES(
 </tbody>
 </table>
 <p><code>SHOW STREAMS</code> command will show all stream jobs in the system.</p>
+<h5>
+<a id="alter-table-close-stream" class="anchor" href="#alter-table-close-stream" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>ALTER TABLE CLOSE STREAM</h5>
+<p>When the streaming application is stopped and the user wants to manually trigger data conversion from carbon streaming files to columnar files, one can use
+<code>ALTER TABLE sink COMPACT 'CLOSE_STREAMING';</code></p>
 <script>
 $(function() {
   // Show selected style on nav item

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/supported-data-types-in-carbondata.html
----------------------------------------------------------------------
diff --git a/content/supported-data-types-in-carbondata.html b/content/supported-data-types-in-carbondata.html
index 4f584da..d873fab 100644
--- a/content/supported-data-types-in-carbondata.html
+++ b/content/supported-data-types-in-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/timeseries-datamap-guide.html
----------------------------------------------------------------------
diff --git a/content/timeseries-datamap-guide.html b/content/timeseries-datamap-guide.html
index a9137d0..9550fa1 100644
--- a/content/timeseries-datamap-guide.html
+++ b/content/timeseries-datamap-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/usecases.html
----------------------------------------------------------------------
diff --git a/content/usecases.html b/content/usecases.html
index 7e5e07c..b6b4d74 100644
--- a/content/usecases.html
+++ b/content/usecases.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>
@@ -330,12 +330,6 @@
 <td>The YARN application directory will usually be on a single disk. YARN would be configured with multiple disks to be used as temp or to assign randomly to applications. Using the YARN temp directory will allow carbon to use multiple disks and improve IO performance</td>
 </tr>
 <tr>
-<td>Data Loading</td>
-<td>carbon.use.multiple.temp.dir</td>
-<td>TRUE</td>
-<td>multiple disks to write sort files will lead to better IO and reduce the IO bottleneck</td>
-</tr>
-<tr>
 <td>Compaction</td>
 <td>carbon.compaction.level.threshold</td>
 <td>6,6</td>
@@ -468,12 +462,6 @@
 </tr>
 <tr>
 <td>Data Loading</td>
-<td>carbon.use.multiple.temp.dir</td>
-<td>TRUE</td>
-<td>multiple disks to write sort files will lead to better IO and reduce the IO bottleneck</td>
-</tr>
-<tr>
-<td>Data Loading</td>
 <td>sort.inmemory.size.in.mb</td>
 <td>92160</td>
 <td>Memory allocated to do in-memory sorting. When more memory is available in the node, configuring this will retain more sort blocks in memory so that the merge sort is faster due to little or no IO</td>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/content/videogallery.html
----------------------------------------------------------------------
diff --git a/content/videogallery.html b/content/videogallery.html
index d6b9dbe..7b7e66c 100644
--- a/content/videogallery.html
+++ b/content/videogallery.html
@@ -49,6 +49,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -61,9 +64,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/resources/application.conf
----------------------------------------------------------------------
diff --git a/src/main/resources/application.conf b/src/main/resources/application.conf
index 020b127..2f1b695 100644
--- a/src/main/resources/application.conf
+++ b/src/main/resources/application.conf
@@ -17,7 +17,7 @@ fileList=["configuration-parameters",
   "how-to-contribute-to-apache-carbondata",
   "introduction",
   "usecases",
-  "CSDK-guide",
+  "csdk-guide",
   "carbon-as-spark-datasource-guide"
   ]
 dataMapFileList=[

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/scala/html/header.html
----------------------------------------------------------------------
diff --git a/src/main/scala/html/header.html b/src/main/scala/html/header.html
index 52e6681..196736f 100644
--- a/src/main/scala/html/header.html
+++ b/src/main/scala/html/header.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.1/"
+                                   target="_blank">Apache CarbonData 1.5.1</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
                                    target="_blank">Apache CarbonData 1.5.0</a></li>
                             <li>
@@ -64,9 +67,6 @@
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/"
                                    target="_blank">Apache CarbonData 1.3.1</a></li>
                             <li>
-                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/"
-                                   target="_blank">Apache CarbonData 1.3.0</a></li>
-                            <li>
                                 <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
                                    target="_blank">Release Archive</a></li>
                         </ul>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/scala/scripts/CSDK-guide
----------------------------------------------------------------------
diff --git a/src/main/scala/scripts/CSDK-guide b/src/main/scala/scripts/CSDK-guide
deleted file mode 100644
index 2c8ffc0..0000000
--- a/src/main/scala/scripts/CSDK-guide
+++ /dev/null
@@ -1,11 +0,0 @@
-<script>
-$(function() {
-  // Show selected style on nav item
-  $('.b-nav__api').addClass('selected');
-
-  if (!$('.b-nav__api').parent().hasClass('nav__item__with__subs--expanded')) {
-    // Display api subnav items
-    $('.b-nav__api').parent().toggleClass('nav__item__with__subs--expanded');
-  }
-});
-</script>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/ae77df2e/src/main/scala/scripts/csdk-guide
----------------------------------------------------------------------
diff --git a/src/main/scala/scripts/csdk-guide b/src/main/scala/scripts/csdk-guide
new file mode 100644
index 0000000..2c8ffc0
--- /dev/null
+++ b/src/main/scala/scripts/csdk-guide
@@ -0,0 +1,11 @@
+<script>
+$(function() {
+  // Show selected style on nav item
+  $('.b-nav__api').addClass('selected');
+
+  if (!$('.b-nav__api').parent().hasClass('nav__item__with__subs--expanded')) {
+    // Display api subnav items
+    $('.b-nav__api').parent().toggleClass('nav__item__with__subs--expanded');
+  }
+});
+</script>
\ No newline at end of file