Posted to commits@carbondata.apache.org by ch...@apache.org on 2017/02/04 02:38:05 UTC

[06/35] incubator-carbondata-site git commit: Updated website for CarbonData release 1.0.0

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/Contribution.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/Contribution.html b/src/main/webapp/docs/latest/Contribution.html
deleted file mode 100644
index f39aab1..0000000
--- a/src/main/webapp/docs/latest/Contribution.html
+++ /dev/null
@@ -1,188 +0,0 @@
-<!DOCTYPE html><html><head><meta charset="utf-8"><title>Untitled Document.md</title><style>
-
-</style></head><body id="preview">
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-“License”); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-&quot;AS IS&quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<h1><div id="Contributing_To_CarbonData"></div>Contributing to CarbonData</h1>
-<hr>
-<p>
-The Apache CarbonData community welcomes contributions from anyone with a passion for faster data formats!
-Apache CarbonData is a new file format for faster interactive queries. It uses advanced columnar storage, index, compression,
-and encoding techniques to improve computing efficiency, which helps speed up queries by an order of magnitude over petabytes of data.
-<br />
-We follow a review-then-commit workflow in CarbonData for all contributions.
-</p>
-<strong><i><h5>Engage -> Design -> Code -> Review -> Commit</h5></i></strong>
-
-<h2>Getting engaged with CarbonData</h2>
-<hr />
-<ul>
-  <li>  <div id="Mailing_List"><h4>Mailing List(s)</h4></div>
-    <ul>
-      <li>
-        You can ask questions or start a discussion with the community on the dev mailing list forum at:<br />
-        <a href="http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com">http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com</a>
-      </li>
-      <li>
-       Design and implementation issues can be discussed on: <br />
-       <a href="mailto:dev@carbondata.incubator.apache.org">dev@carbondata.incubator.apache.org</a>
-        </li>
-        <li>
-       You can join the community by sending an email to: <br />
-       <a href="mailto:dev-subscribe@carbondata.incubator.apache.org">dev-subscribe@carbondata.incubator.apache.org</a>
-      </li>
-      <br />
-    </ul>
-  </li>
-  <li><div id="Apache_Jira"><h4>Apache JIRA</h4></div>
-    <ul>
-    We use <a href="https://issues.apache.org/jira/browse/CARBONDATA/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel">Apache JIRA</a>
-      as an issue tracking and project management tool, as well as a way to communicate among a very diverse and distributed set of contributors. To be able to gather feedback, avoid frustration, and avoid duplicated efforts, all CarbonData-related work should be tracked there. </li>
-If you do not have an Apache JIRA account, <a href="https://issues.apache.org/jira/secure/Dashboard.jspa">sign up</a> here.
-<br />
-If a quick search doesn’t turn up an existing JIRA issue for the work you want to contribute, create it. Please discuss your proposal with a committer or the component lead in JIRA or, alternatively,
-on the developer mailing list <a href="mailto:dev@carbondata.incubator.apache.org">(dev@carbondata.incubator.apache.org)</a>.
-<br />
-If there’s an existing JIRA issue for your intended contribution, please comment about your intended work. Once the work is understood, a committer will assign the issue to you. (If you don’t have a JIRA role yet, you’ll be added to the “contributor” role.) If an issue is currently assigned, please check with the current assignee before reassigning.
-<br />
-For moderate or large contributions, you should not start coding or writing a design doc unless there is a corresponding JIRA issue assigned to you for that work. Simple changes, like fixing typos, do not require an associated issue.
-    </ul>
-</ul>
-
-<h2>Designing for CarbonData</h2>
-<hr />
-To avoid potential frustration during the code review cycle, we encourage you to clearly scope and design non-trivial contributions with the CarbonData community before you start coding.
-Generally, the JIRA issue is the best place to gather relevant design docs, comments, or references. It’s great to explicitly include relevant stakeholders early in the conversation. For designs that may be generally interesting, we also encourage conversations on the developer’s mailing list.
-<br />
-
-<h2>Coding for CarbonData</h2>
-<hr />
-We use GitHub’s pull request functionality to review proposed code changes. If you do not have a personal GitHub account, sign up <a href="https://github.com/">here</a>.
-<ul>
-  <li>
-    <h4><a href="https://github.com/apache/incubator-carbondata/blob/master/docs/How-to-contribute-to-Apache-CarbonData.html#fork-the-repository-on-github">Fork the repository on GitHub</a></h4>
-    Go to the <a href="https://github.com/apache/incubator-carbondata">Apache CarbonData GitHub mirror</a>  and fork the repository to your own private account. This will be your private workspace for staging changes.
-  </li>
-  <li>
-    <h4><a href="https://github.com/apache/incubator-carbondata/blob/master/docs/How-to-contribute-to-Apache-CarbonData.html#clone-the-repository-locally">Clone the repository locally</a></h4>
-    You are now ready to create the development environment on your local machine. Clone CarbonData’s read-only GitHub mirror.<br />
-    <strong><i>$ git clone https://github.com/apache/incubator-carbondata.git<br />
-    $ cd incubator-carbondata</i></strong><br />
-    Add your forked repository as an additional Git remote, where you’ll push your changes.<br />
-    <strong><i>$ git remote add <GitHub_user> https://github.com/<GitHub_user>/incubator-carbondata.git</i></strong>
-      <br />
-    You are now ready to start developing!
-  </li>
-  <li>
-    <h4><a href="https://github.com/apache/incubator-carbondata/blob/master/docs/How-to-contribute-to-Apache-CarbonData.html#create-a-branch-in-your-fork">Create a branch in your fork</a></h4>
-    You’ll work on your contribution in a branch in your own (forked) repository. Create a local branch, initialized with the state of the branch you expect your changes to be merged into. Keep in mind that we use several branches, including master, feature-specific, and release-specific branches. If you are unsure, initialize with the state of the master branch.
-    <br /><strong><i>$ git fetch --all<br />
-$ git checkout -b <my-branch> origin/master
-</i></strong><br />
-    At this point, you can start making and committing changes to this branch in a standard way.
-  </li>
-  <li>
-    <h4><a href="https://github.com/apache/incubator-carbondata/blob/master/docs/How-to-contribute-to-Apache-CarbonData.html#syncing-and-pushing-your-branch">Syncing and pushing your branch</a></h4>
-    Periodically while you work, and certainly before submitting a pull request, you should update your branch with the most recent changes to the target branch.<br />
-    <strong><i>$ git pull --rebase</i></strong><br />
-    Remember to always use the --rebase parameter to avoid extraneous merge commits.<br />
-    To push your local committed changes to your (forked) repository on GitHub, run:<br />
-<strong><i>$ git push <GitHub_user> <my-branch></i></strong>
-  </li>
-  <li>
-    <h4><a href="https://github.com/apache/incubator-carbondata/blob/master/docs/How-to-contribute-to-Apache-CarbonData.html#testing">Testing</a></h4>
-    All code should have appropriate unit testing coverage. New code should have new tests in the same contribution. Bug fixes should include a regression test to prevent the issue from reoccurring.
-    For contributions to the Java code, run unit tests locally via Maven.<br />
-    <strong><i>$ mvn clean verify
-    </i></strong>
-  </li>
-</ul>
-<h2>Review Process for CarbonData</h2>
-<hr />
-Once the initial code is complete and the tests pass, it’s time to start the code review process. We review and discuss all code, no matter who authors it. It’s a great way to build community, since you can learn from other developers, and they become familiar with your contribution. It also builds a strong project by encouraging a high quality bar and keeping code consistent throughout the project.
-<ul>
-  <li>
-    <h4>Create a pull request</h4>
-    Organize your commits to make your reviewer’s job easier. Use the following command to re-order, squash, edit, or change the description of individual commits.<br />
-    <strong><i>$ git rebase -i origin/master</i></strong><br />
-    Navigate to the CarbonData GitHub mirror to create a pull request. The title of the pull request should be strictly in the following format:<br />
-    <strong><i>[CARBONDATA-<issue number>] Title of the pull request</i></strong><br />
-    Please include a descriptive pull request message to help make the reviewer’s job easier.
-
-If you know a good committer to review your pull request, please make a comment like the following. If not, don’t worry, a committer will pick it up.<br />
-  <strong><i>Hi @<committer/reviewer name>, can you please take a look?</i></strong><br />
-  </li>
-  <li>
-    <h4>Code Review and Revision</h4>
-    During the code review process, don’t rebase your branch or otherwise modify published commits, since this can remove existing comment history and be confusing to the reviewer. When you make a revision, always push it in a new commit.<br />
-
-Our GitHub mirror automatically provides pre-commit testing coverage using Jenkins. Please make sure those tests pass; the contribution cannot be merged otherwise.
-  </li>
-</ul>
-
-<h2>LGTM</h2>
-<hr />
-Once the reviewer is happy with the change, they’ll respond with an LGTM (“looks good to me!”). At this point, the committer will take over, possibly make some additional touch-ups, and merge your changes into the codebase.
-If both the author and the reviewer are committers, either can merge the pull request. Just be sure to communicate clearly whose responsibility it is in this particular case.
-Thank you for your contribution to Apache CarbonData!
-
-<h2>Deleting your Branch (Optional)</h2>
-<hr />
-Once the pull request is merged into the Apache CarbonData repository, you can safely delete the branch locally and purge it from your forked repository. From another local branch, run:<br />
-<strong><i>
-  $ git fetch --all
-  <br />
-  $ git branch -d <my-branch>
-  <br />
-    $ git push <GitHub_user> --delete <my-branch>
-  <br />
-</i></strong>
-<div id="Committers"></div>
-  <h2>Committers</h2>
-  <ul>
-    <li>Current Committers
-      <p>To see the current committers, <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Committers" target="blank">click here</a>.</p>
-    </li>
-    <li>Becoming a Committer
-      <p>To start contributing to Apache CarbonData, refer to '<a href="#Contributing_To_CarbonData">How to contribute</a>'. </p>
-      <p>Anyone can contribute to the code or documentation of the project. The PMC regularly adds new committers from the active contributors, based on their contributions to Apache CarbonData.</p>
-    </li>
-</ul>
-<!--
-<center>
-<b><a href="#top">Top</a></b>
-</center>
--->
-
-<script type="text/javascript">
- $('a[href*="#"]:not([href="#"])').click(function() {
-   if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') && location.hostname == this.hostname) {
-    var target = $(this.hash);
-    target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
-    if (target.length) 
-        { $('html, body').animate({ scrollTop: target.offset().top - 52 },100);
-          return false;
-        }
-     }
-  });
-</script>
-</body>
-
-</html>

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/configuration-parameters.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/configuration-parameters.html b/src/main/webapp/docs/latest/configuration-parameters.html
index 6b55200..1e893ae 100644
--- a/src/main/webapp/docs/latest/configuration-parameters.html
+++ b/src/main/webapp/docs/latest/configuration-parameters.html
@@ -21,7 +21,7 @@
   <li><a href="#performance-configuration">Performance Configuration</a></li>
   <li><a href="#miscellaneous-configuration">Miscellaneous Configuration</a></li>
   <li><a href="#spark-configuration">Spark Configuration</a></li>
-</ul><h2>System Configuration</h2><p>This section provides the details of all the configurations required for the CarbonData System.</p><p><b><p align="center">System Configuration in carbon.properties</p></b></p>
+</ul><h2 id="system-configuration">System Configuration</h2><p>This section provides the details of all the configurations required for the CarbonData System.</p><p><b><p align="center">System Configuration in carbon.properties</p></b></p>
 <table class="table table-striped table-bordered">
   <thead>
     <tr>
@@ -39,7 +39,7 @@
     <tr>
       <td>carbon.ddl.base.hdfs.url </td>
       <td>hdfs://hacluster/opt/data </td>
-      <td>This property is used to configure the HDFS relative path from the HDFS base path, configured in fs.defaultFS. The path configured in carbon.ddl.base.hdfs.url will be appended to the HDFS path configured in fs.defaultFS. If this path is configured, then user need not pass the complete path while dataload. For example: If absolute path of the csv file is hdfs://10.18.101.155:54310/data/cnbc/2016/xyz.csv, the path "hdfs://10.18.101.155:54310" will come from property fs.defaultFS and user can configure the /data/cnbc/ as carbon.ddl.base.hdfs.url. Now while dataload user can specify the csv path as /2016/xyz.csv. </td>
+      <td>This property is used to configure the HDFS relative path; the path configured in carbon.ddl.base.hdfs.url is appended to the HDFS path configured in fs.defaultFS. If this path is configured, the user need not pass the complete path while loading data. For example: if the absolute path of the CSV file is hdfs://10.18.101.155:54310/data/cnbc/2016/xyz.csv, the path "hdfs://10.18.101.155:54310" comes from the property fs.defaultFS and the user can configure /data/cnbc/ as carbon.ddl.base.hdfs.url. The CSV path during data load can then be specified as /2016/xyz.csv. </td>
     </tr>
     <tr>
       <td>carbon.badRecords.location </td>
@@ -49,15 +49,15 @@
     <tr>
       <td>carbon.kettle.home </td>
       <td>$SPARK_HOME/carbonlib/carbonplugins </td>
-      <td>Path used by CarbonData internally to create graph for loading the data. </td>
+      <td>Path of the Kettle plugins, used when loading data with Kettle. </td>
     </tr>
     <tr>
       <td>carbon.data.file.version </td>
       <td>2 </td>
-      <td>If this parameter value is set to 1, then CarbonData will support the data load which is in old format. If the value is set to 2, then CarbonData will support the data load of new format only. NOTE: The file format created before DataSight Spark V100R002C30 is considered as old format. </td>
+      <td>If this parameter is set to 1, CarbonData supports data load in the old format (0.x versions). If it is set to 2, CarbonData supports data load in the new format (1.x onwards) only. </td>
     </tr>
   </tbody>
-</table><h2>Performance Configuration</h2><p>This section provides the details of all the configurations required for CarbonData Performance Optimization.</p><p><b><p align="center">Performance Configuration in carbon.properties</p></b></p>
+</table><h2 id="performance-configuration">Performance Configuration</h2><p>This section provides the details of all the configurations required for CarbonData Performance Optimization.</p><p><b><p align="center">Performance Configuration in carbon.properties</p></b></p>
 <ul>
   <li><strong>Data Loading Configuration</strong></li>
 </ul>
@@ -222,7 +222,7 @@
       <td> </td>
     </tr>
   </tbody>
-</table><h2>Miscellaneous Configuration</h2><p><b><p align="center">Extra Configuration in carbon.properties</p></b></p>
+</table><h2 id="miscellaneous-configuration">Miscellaneous Configuration</h2><p><b><p align="center">Extra Configuration in carbon.properties</p></b></p>
 <ul>
   <li><strong>Time format for CarbonData</strong></li>
 </ul>
@@ -384,7 +384,7 @@
     <tr>
       <td>high.cardinality.threshold </td>
       <td>1000000 </td>
-      <td>Threshold to identify whether high cardinality column.Configuration value formula: Value of cardinality &gt; configured value of high.cardinality. The minimum value is 10000. </td>
+      <td>Threshold to identify a high cardinality column. If a column's cardinality exceeds the configured value, the column is excluded from dictionary encoding. The minimum value is 10000. </td>
     </tr>
     <tr>
       <td>high.cardinality.row.count.percentage </td>
@@ -402,7 +402,7 @@
       <td>The property used to set the data granularity level DAY, HOUR, MINUTE, or SECOND. </td>
     </tr>
   </tbody>
-</table><h2>Spark Configuration</h2><p><b><p align="center">Spark Configuration Reference in spark-defaults.conf</p></b></p>
+</table><h2 id="spark-configuration">Spark Configuration</h2><p><b><p align="center">Spark Configuration Reference in spark-defaults.conf</p></b></p>
 <table class="table table-striped table-bordered">
   <thead>
     <tr>
@@ -415,12 +415,12 @@
     <tr>
       <td>spark.driver.memory </td>
       <td>1g </td>
-      <td>Amount of memory to use for the driver process, i.e. where SparkContext is initialized. NOTE: In client mode, this config must not be set through the SparkConf directly in your application, because the driver JVM has already started at that point. Instead, please set this through the --driver-memory command line option or in your default properties file. </td>
+      <td>Amount of memory to be used by the driver process. </td>
     </tr>
     <tr>
       <td>spark.executor.memory </td>
       <td>1g </td>
-      <td>Amount of memory to use per executor process. </td>
+      <td>Amount of memory to be used per executor process. </td>
     </tr>
     <tr>
       <td>spark.sql.bigdata.register.analyseRule </td>

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/data-management.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/data-management.html b/src/main/webapp/docs/latest/data-management.html
index b756271..d2758b8 100644
--- a/src/main/webapp/docs/latest/data-management.html
+++ b/src/main/webapp/docs/latest/data-management.html
@@ -15,16 +15,18 @@
     KIND, either express or implied.  See the License for the
     specific language governing permissions and limitations
     under the License.
---><h1>Data Management</h1><p>This tutorial is going to introduce you to the conceptual details of data management like:</p>
+-->
+<!--<script src="../../js/mdNavigation.js" type="text/javascript"></script>-->
+<h1>Data Management</h1><p>This tutorial introduces you to the conceptual details of data management, such as:</p>
 <ul>
   <li><a href="#loading-data">Loading Data</a></li>
   <li><a href="#deleting-data">Deleting Data</a></li>
   <li><a href="#compacting-data">Compacting Data</a></li>
   <li><a href="#updating-data">Updating Data</a></li>
-</ul><h2>Loading Data</h2>
+</ul><h2 id="loading-data">Loading Data</h2>
 <ul>
   <li><strong>Scenario</strong></li>
-</ul><p>After creating a table, you can load data to the table using the <a href="dml-operation-on-carbondata.html">LOAD DATA</a> command. The loaded data is available for querying.  When data load is triggered, the data is encoded in CarbonData format and copied into HDFS CarbonData store path (specified in carbon.properties file)  in compressed, multi dimensional columnar format for quick analysis queries. The same command can be used to load new data or to  update the existing data. Only one data load can be triggered for one table. The high cardinality columns of the dictionary encoding are  automatically recognized and these columns will not be used for dictionary encoding.</p>
+</ul><p>After creating a table, you can load data into the table using the <a href="mainpage.html?page=dml">LOAD DATA</a> command. The loaded data is available for querying. When a data load is triggered, the data is encoded in CarbonData format and copied into the HDFS CarbonData store path (specified in the carbon.properties file) in a compressed, multi-dimensional columnar format for quick analysis queries. The same command can be used to load new data or to update the existing data. Only one data load can be triggered for one table at a time. High cardinality columns are automatically recognized and excluded from dictionary encoding.</p>
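+<p>As a quick illustration (the file path and table name are examples only; see the DML section for the full set of load options), a basic load looks like:</p>
+<p><code>
+  LOAD DATA LOCAL INPATH '/opt/rawdata/data.csv' INTO TABLE table1
+  OPTIONS('DELIMITER'=',');
+</code></p>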
 <ul>
   <li><strong>Procedure</strong></li>
 </ul><p>Data loading is a process that involves execution of multiple steps to read, sort and encode the data in CarbonData store format. Each step is executed on different threads. After the data loading process is complete, the status (success/partial success) is updated to the CarbonData store metadata. The table below lists the possible load statuses.</p>
@@ -45,13 +47,14 @@
       <td>Data is loaded into table and bad records are found. Bad records are stored at carbon.badrecords.location. </td>
     </tr>
   </tbody>
-</table><p>In case of failure, the error will be logged in error log. Details of loads can be seen with <a href="dml-operation-on-carbondata.html">SHOW SEGMENTS</a> command. The show segment command output consists of :</p>
+</table><p>In case of failure, the error will be logged in the error log. Details of loads can be seen with the <a href="mainpage.html?page=dml">SHOW SEGMENTS</a> command. The SHOW SEGMENTS command output consists of:</p>
 <ul>
   <li>SegmentSequenceID</li>
   <li>START_TIME OF LOAD</li>
   <li>END_TIME OF LOAD</li>
   <li>LOAD STATUS</li>
-</ul><p>The latest load will be displayed first in the output.</p><p>Refer to <a href="dml-operation-on-carbondata.html">DML operations on CarbonData</a> for load commands.</p><h2>Deleting Data</h2>
+</ul><p>The latest load will be displayed first in the output.</p><p>Refer to <a href="mainpage.html?page=dml">DML operations on CarbonData</a> for load commands.</p>
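+<p>For instance (the table name is illustrative; the LIMIT clause caps how many of the most recent loads are listed):</p>
+<p><code>
+  SHOW SEGMENTS FOR TABLE table1 LIMIT 4;
+</code></p>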
+<h2 id="deleting-data">Deleting Data</h2>
 <ul>
   <li><strong>Scenario</strong></li>
 </ul><p>If you have loaded wrong data into the table, or too many bad records are present and you want to modify and reload the data, you can delete required data loads. The load can be deleted using the Segment Sequence ID, or, if the table contains a date field, the data can be deleted using the date field. If there are specific records that need to be deleted based on some filter condition(s), they can be deleted by record.</p>
@@ -59,7 +62,7 @@
   <li><strong>Procedure</strong></li>
 </ul><p>The loaded data can be deleted in the following ways:</p>
 <ul>
-  <li><p>Delete by Segment ID</p><p>After you get the segment ID of the segment that you want to delete, execute the <a href="dml-operation-on-carbondata.html">DELETE</a> command for the selected segment.  The status of deleted segment is updated to Marked for delete / Marked for Update.</p></li>
+  <li><p>Delete by Segment ID</p><p>After you get the segment ID of the segment that you want to delete, execute the DELETE command for the selected segment. The status of the deleted segment is updated to Marked for Delete / Marked for Update.</p></li>
 </ul>
 <table class="table table-striped table-bordered">
   <thead>
@@ -93,7 +96,7 @@
 </table>
 <ul>
   <li><p>Delete by Date Field</p><p>If the table contains date field, you can delete the data based on a specific date.</p></li>
-  <li><p>Delete by Record</p><p>To delete records from CarbonData table based on some filter Condition(s).</p><p>For delete commands refer to <a href="dml-operation-on-carbondata.html">DML operations on CarbonData</a>.</p></li>
+  <li><p>Delete by Record</p><p>Records can be deleted from a CarbonData table based on filter condition(s).</p><p>For delete commands refer to <a href="mainpage.html?page=dml">DML operations on CarbonData</a>.</p></li>
   <li><p><strong>NOTE</strong>:</p>
   <ul>
     <li>When the delete segment DML is called, the segment will not be deleted physically from the file system. Instead, the segment status will be marked as "Marked for Delete". For query execution, this deleted segment will be excluded.</li>
@@ -106,7 +109,8 @@
   </ul></li>
 </ul><p>Example:</p><p><code>
 CLEAN FILES FOR TABLE table1
-</code></p><p>This DML will physically delete the segment which are "Marked for delete" immediately.</p><h2>Compacting Data</h2>
+</code></p><p>This DML will physically delete the segments which are "Marked for Delete" immediately.</p>
+<h2 id="compacting-data">Compacting Data</h2>
 <ul>
   <li><strong>Scenario</strong></li>
 </ul><p>Frequent data ingestion results in several fragmented CarbonData files in the store directory. Since data is sorted only within each load, the indices perform only within each load. This means that there will be one index for each load, and as the number of data loads increases, the number of indices also increases. As each index works only on one load, the performance of indices is reduced. CarbonData provides provision for compacting the loads. The compaction process combines several segments into one large segment by merge-sorting the data across the segments, as illustrated below. </p>
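+<p>For example (the table name is illustrative; the full syntax is covered under DDL operations), a minor compaction is triggered with:</p>
+<p><code>
+  ALTER TABLE table1 COMPACT 'MINOR';
+</code></p>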
@@ -168,8 +172,9 @@ CLEAN FILES FOR TABLE table1
       <td>0-100 </td>
     </tr>
   </tbody>
-</table><p>For compaction commands refer to <a href="ddl-operation-on-carbondata.html">DDL operations on CarbonData</a></p><h2>Updating Data</h2>
+</table><p>For compaction commands refer to <a href="mainpage.html?page=dml">DDL operations on CarbonData</a>.</p>
+<h2 id="updating-data">Updating Data</h2>
 <ul>
  <li><p><strong>Scenario</strong></p><p>Sometimes after the data has been ingested into the system, it is required to be updated. Also there may be situations where some specific columns need to be updated on the basis of column expression and optional filter conditions.</p></li>
-  <li><p><strong>Procedure</strong></p><p>To update we need to specify the column expression with an optional filter condition(s).</p><p>For update commands refer to <a href="dml-operation-on-carbondata.html">DML operations on CarbonData</a>.</p></li>
+  <li><p><strong>Procedure</strong></p><p>To update, specify the column expression with optional filter condition(s); a sketch follows this list. For update commands refer to <a href="mainpage.html?page=dml">DML operations on CarbonData</a>.</p></li>
 </ul>
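+<p>A minimal sketch (table and column names are assumed for illustration; the exact syntax is described under DML operations):</p>
+<p><code>
+  UPDATE table1 SET (column1) = (column1 + 10) WHERE column2 = 'xyz';
+</code></p>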
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/ddl-operation-on-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/ddl-operation-on-carbondata.html b/src/main/webapp/docs/latest/ddl-operation-on-carbondata.html
index 7886740..3a5b4b5 100644
--- a/src/main/webapp/docs/latest/ddl-operation-on-carbondata.html
+++ b/src/main/webapp/docs/latest/ddl-operation-on-carbondata.html
@@ -15,172 +15,233 @@
     KIND, either express or implied.  See the License for the
     specific language governing permissions and limitations
     under the License.
---><h1>DDL Operations on CarbonData</h1><p>This tutorial guides you through the data definition language support provided by CarbonData.</p><h2>Overview</h2><p>The following DDL operations are supported in CarbonData :</p>
+-->
+<h1>DDL Operations on CarbonData</h1><p>This tutorial guides you through the data definition language support provided by CarbonData.</p><h2>Overview</h2><p>The following DDL operations are supported in CarbonData :</p>
 <ul>
   <li><a href="#create-table">CREATE TABLE</a></li>
   <li><a href="#show-table">SHOW TABLE</a></li>
   <li><a href="#drop-table">DROP TABLE</a></li>
   <li><a href="#compaction">COMPACTION</a></li>
-</ul><h2>CREATE TABLE</h2><p>This command can be used to create a CarbonData table by specifying the list of fields along with the table properties.</p><p><code>
-   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
-                    [(col_name data_type , ...)]               
-   STORED BY &#39;carbondata&#39;
-   [TBLPROPERTIES (property_name=property_value, ...)]
-   // All Carbon&#39;s additional table options will go into properties
-</code></p><h3>Parameter Description</h3>
+  <li><a href="#bucketing">BUCKETING</a></li>
+</ul><h2 id="create-table">CREATE TABLE</h2><p>This command can be used to create a CarbonData table by specifying the list of fields along with the table properties.</p><p><pre><code>
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
+  [(col_name data_type, ...)]
+  STORED BY &#39;carbondata&#39;
+  [TBLPROPERTIES (property_name=property_value, ...)]
+  // All Carbon&#39;s additional table options will go into properties
+</code></pre></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
   <thead>
-    <tr>
-      <th>Parameter </th>
-      <th>Description </th>
-      <th>Optional </th>
-    </tr>
+  <tr>
+    <th>Parameter </th>
+    <th>Description </th>
+    <th>Optional </th>
+  </tr>
   </thead>
   <tbody>
-    <tr>
-      <td>db_name </td>
-      <td>Name of the database. Database name should consist of alphanumeric characters and underscore(_) special character. </td>
-      <td>Yes </td>
-    </tr>
-    <tr>
-      <td>field_list </td>
-      <td>Comma separated List of fields with data type. The field names should consist of alphanumeric characters and underscore(_) special character. </td>
-      <td>No </td>
-    </tr>
-    <tr>
-      <td>table_name </td>
-      <td>The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. </td>
-      <td>No </td>
-    </tr>
-    <tr>
-      <td>STORED BY </td>
-      <td>"org.apache.carbondata.format", identifies and creates a CarbonData table. </td>
-      <td>No </td>
-    </tr>
-    <tr>
-      <td>TBLPROPERTIES </td>
-      <td>List of CarbonData table properties. </td>
-      <td> </td>
-    </tr>
+  <tr>
+    <td>db_name </td>
+    <td>Name of the database. Database name should consist of alphanumeric characters and underscore(_) special character. </td>
+    <td>Yes </td>
+  </tr>
+  <tr>
+    <td>field_list </td>
+    <td>Comma separated List of fields with data type. The field names should consist of alphanumeric characters and underscore(_) special character. </td>
+    <td>No </td>
+  </tr>
+  <tr>
+    <td>table_name </td>
+    <td>The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. </td>
+    <td>No </td>
+  </tr>
+  <tr>
+    <td>STORED BY </td>
+    <td>"org.apache.carbondata.format", identifies and creates a CarbonData table. </td>
+    <td>No </td>
+  </tr>
+  <tr>
+    <td>TBLPROPERTIES </td>
+    <td>List of CarbonData table properties. </td>
+    <td> </td>
+  </tr>
   </tbody>
 </table><h3>Usage Guidelines</h3><p>Following are the guidelines for using table properties.</p>
 <ul>
   <li><p><strong>Dictionary Encoding Configuration</strong></p><p>Dictionary encoding is enabled by default for all String columns, and disabled for non-String columns. You can include and exclude columns for dictionary encoding.</p></li>
 </ul><p><code>
-       TBLPROPERTIES (&quot;DICTIONARY_EXCLUDE&quot;=&quot;column1, column2&quot;) 
-       TBLPROPERTIES (&quot;DICTIONARY_INCLUDE&quot;=&quot;column1, column2&quot;) 
+  TBLPROPERTIES (&quot;DICTIONARY_EXCLUDE&quot;=&quot;column1, column2&quot;)
+  TBLPROPERTIES (&quot;DICTIONARY_INCLUDE&quot;=&quot;column1, column2&quot;)
 </code></p><p>Here, DICTIONARY_EXCLUDE will exclude dictionary creation. This is applicable for high-cardinality columns and is an optional parameter. DICTIONARY_INCLUDE will generate dictionary for the columns specified in the list.</p>
 <ul>
   <li><p><strong>Row/Column Format Configuration</strong></p><p>Column groups with more than one column are stored in row format, instead of columnar format. By default, each column is a separate column group.</p></li>
 </ul><p><code>
-TBLPROPERTIES (&quot;COLUMN_GROUPS&quot;=&quot;(column1,column3),
-(Column4,Column5,Column6)&quot;) 
+  TBLPROPERTIES (&quot;COLUMN_GROUPS&quot;=&quot;(column1, column3),
+  (Column4,Column5,Column6)&quot;)
 </code></p>
 <ul>
-  <li><p><strong>Table Block Size Configuration</strong></p><p>The block size of table files can be defined using the property TABLE_BLOCKSIZE. It accepts only integer values. The default value is 1024 MB and supports a range of 1 MB to 2048 MB.  If you do not specify this value in the DDL command , default value is used. </p></li>
+  <li><p><strong>Table Block Size Configuration</strong></p><p>The block size of table files can be defined using the property TABLE_BLOCKSIZE. It accepts only integer values. The default value is 1024 MB and supports a range of 1 MB to 2048 MB. If you do not specify this value in the DDL command, the default value is used.</p></li>
 </ul><p><code>
-       TBLPROPERTIES (&quot;TABLE_BLOCKSIZE&quot;=&quot;512 MB&quot;)
+  TBLPROPERTIES (&quot;TABLE_BLOCKSIZE&quot;=&quot;512 MB&quot;)
 </code></p><p>Here 512 MB means the block size of this table is 512 MB; you can also set it as 512M or 512.</p>
 <ul>
   <li><p><strong>Inverted Index Configuration</strong></p><p>Inverted index is very useful to improve compression ratio and query speed, especially for low-cardinality columns that are in rearward position. By default inverted index is enabled. The user can disable the inverted index creation for some columns.</p></li>
 </ul><p><code>
-       TBLPROPERTIES (&quot;NO_INVERTED_INDEX&quot;=&quot;column1,column3&quot;)
+  TBLPROPERTIES (&quot;NO_INVERTED_INDEX&quot;=&quot;column1, column3&quot;)
 </code></p><p>No inverted index shall be generated for the columns specified in NO_INVERTED_INDEX. This property is applicable on columns with high-cardinality and is an optional parameter.</p><p>NOTE:</p>
 <ul>
   <li><p>By default all columns other than numeric datatype are treated as dimensions and all columns of numeric datatype are treated as measures.</p></li>
-  <li><p>All dimensions except complex datatype columns are part of multi dimensional key(MDK). This behavior can be overridden by using TBLPROPERTIES. If the user wants to keep any column (except columns of complex datatype) in multi dimensional key then he can keep the columns either in DICTIONARY_EXCLUDE or DICTIONARY_INCLUDE.</p><h3>Example:</h3><p><code>
-   CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
-                            productNumber Int,
-                            productName String, 
-                            storeCity String, 
-                            storeProvince String, 
-                            productCategory String, 
-                            productBatch String,
-                            saleQuantity Int,
-                            revenue Int)       
-   STORED BY &#39;carbondata&#39; 
-   TBLPROPERTIES (&#39;COLUMN_GROUPS&#39;=&#39;(productName,productCategory)&#39;,
-              &#39;DICTIONARY_EXCLUDE&#39;=&#39;productName&#39;,
-              &#39;DICTIONARY_INCLUDE&#39;=&#39;productNumber&#39;,
-              &#39;NO_INVERTED_INDEX&#39;=&#39;productBatch&#39;)
-</code></p></li>
-</ul><h2>SHOW TABLE</h2><p>This command can be used to list all the tables in current database or all the tables of a specific database. <code>
+  <li><p>All dimensions except complex datatype columns are part of the multi-dimensional key (MDK). This behavior can be overridden by using TBLPROPERTIES. If the user wants to keep any column (except columns of complex datatype) in the multi-dimensional key, they can keep the columns either in DICTIONARY_EXCLUDE or DICTIONARY_INCLUDE.</p><h3>Example:</h3>
+    <p><pre><code>
+    CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+      productNumber Int,
+      productName String,
+      storeCity String,
+      storeProvince String,
+      productCategory String,
+      productBatch String,
+      saleQuantity Int,
+      revenue Int)
+    STORED BY &#39;carbondata&#39;
+    TBLPROPERTIES (&#39;COLUMN_GROUPS&#39;=&#39;(productName,productCategory)&#39;,
+      &#39;DICTIONARY_EXCLUDE&#39;=&#39;productName&#39;,
+      &#39;DICTIONARY_INCLUDE&#39;=&#39;productNumber&#39;,
+      &#39;NO_INVERTED_INDEX&#39;=&#39;productBatch&#39;)
+  </code></pre></p></li>
+</ul><h2 id="show-table">SHOW TABLE</h2><p>This command can be used to list all the tables in current database or all the tables of a specific database. <code>
   SHOW TABLES [IN db_Name];
 </code></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
   <thead>
-    <tr>
-      <th>Parameter </th>
-      <th>Description </th>
-      <th>Optional </th>
-    </tr>
+  <tr>
+    <th>Parameter </th>
+    <th>Description </th>
+    <th>Optional </th>
+  </tr>
   </thead>
   <tbody>
-    <tr>
-      <td>IN db_Name </td>
-      <td>Name of the database. Required only if tables of this specific database are to be listed. </td>
-      <td>Yes </td>
-    </tr>
+  <tr>
+    <td>IN db_Name </td>
+    <td>Name of the database. Required only if tables of this specific database are to be listed. </td>
+    <td>Yes </td>
+  </tr>
   </tbody>
 </table><h3>Example:</h3><p><code>
   SHOW TABLES IN ProductSchema;
-</code></p><h2>DROP TABLE</h2><p>This command is used to delete an existing table.</p><p><code>
+</code></p><h2 id="drop-table">DROP TABLE</h2><p>This command is used to delete an existing table.</p><p><code>
   DROP TABLE [IF EXISTS] [db_name.]table_name;
 </code></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
   <thead>
-    <tr>
-      <th>Parameter </th>
-      <th>Description </th>
-      <th>Optional </th>
-    </tr>
+  <tr>
+    <th>Parameter </th>
+    <th>Description </th>
+    <th>Optional </th>
+  </tr>
   </thead>
   <tbody>
-    <tr>
-      <td>db_Name </td>
-      <td>Name of the database. If not specified, current database will be selected. </td>
-      <td>YES </td>
-    </tr>
-    <tr>
-      <td>table_name </td>
-      <td>Name of the table to be deleted. </td>
-      <td>NO </td>
-    </tr>
+  <tr>
+    <td>db_Name </td>
+    <td>Name of the database. If not specified, current database will be selected. </td>
+    <td>YES </td>
+  </tr>
+  <tr>
+    <td>table_name </td>
+    <td>Name of the table to be deleted. </td>
+    <td>NO </td>
+  </tr>
   </tbody>
 </table><h3>Example:</h3><p><code>
   DROP TABLE IF EXISTS productSchema.productSalesTable;
-</code></p><h2>COMPACTION</h2><p>This command merges the specified number of segments into one segment. This enhances the query performance of the table.</p><p><code>
+</code></p><h2 id="compaction">COMPACTION</h2><p>This command merges the specified number of segments into one segment. This enhances the query performance of the table.</p><p><code>
   ALTER TABLE [db_name.]table_name COMPACT &#39;MINOR/MAJOR&#39;;
-</code></p><p>To get details about Compaction refer to <a href="data-management.html">Data Management</a></p><h3>Parameter Description</h3>
+</code></p><p>To get details about Compaction, refer to Data Management.</p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
   <thead>
-    <tr>
-      <th>Parameter </th>
-      <th>Description </th>
-      <th>Optional </th>
-    </tr>
+  <tr>
+    <th>Parameter </th>
+    <th>Description </th>
+    <th>Optional </th>
+  </tr>
   </thead>
   <tbody>
-    <tr>
-      <td>db_name </td>
-      <td>Database name, if it is not specified then it uses current database. </td>
-      <td>YES </td>
-    </tr>
-    <tr>
-      <td>table_name </td>
-      <td>The name of the table in provided database.</td>
-      <td>NO </td>
-    </tr>
+  <tr>
+    <td>db_name </td>
+    <td>Database name, if it is not specified then it uses current database. </td>
+    <td>YES </td>
+  </tr>
+  <tr>
+    <td>table_name </td>
+    <td>The name of the table in provided database.</td>
+    <td>NO </td>
+  </tr>
   </tbody>
 </table><h3>Syntax</h3>
 <ul>
   <li><strong>Minor Compaction</strong></li>
-<p><code>
-ALTER TABLE table_name COMPACT &#39;MINOR&#39;;
+</ul><p><code>
+  ALTER TABLE table_name COMPACT &#39;MINOR&#39;;
 </code>
-</p>
-  <li><strong>Major Compaction</strong></li>
-<p><code>
-ALTER TABLE table_name COMPACT &#39;MAJOR&#39;;
+</p>
+<ul>
+  <li><strong>Major Compaction</strong></li>
+</ul><p><code>
+  ALTER TABLE table_name COMPACT &#39;MAJOR&#39;;
 </code></p>
-</ul>
\ No newline at end of file
+ <h2 id="bucketing">BUCKETING</h2>
+<p>Bucketing feature can be used to distribute/organize the table/partition data into multiple files such that similar records are present in the same file. While creating a table, a user needs to specify the columns to be used for bucketing and the number of buckets. For the selction of bucket the Hash value of columns is used.</p><p>
+  <pre>
+  <code>CREATE TABLE [IF NOT EXISTS] [db_name.]table_name  [(col_name data_type, ...)]
+        STORED BY 'carbondata'  TBLPROPERTIES('BUCKETNUMBER'='noOfBuckets', 'BUCKETCOLUMNS'='columnname', 'TABLENAME'='tablename')
+  </code>
+</pre>
+</p><h3>Parameter Description</h3>
+<table class="table table-striped table-bordered">
+  <thead>
+  <tr>
+    <th>Parameter </th>
+    <th>Description </th>
+    <th>Optional </th>
+  </tr>
+  </thead>
+  <tbody>
+  <tr>
+    <td>BUCKETNUMBER </td>
+    <td>Specifies the number of Buckets to be created. </td>
+    <td>No </td>
+  </tr>
+  <tr>
+    <td>BUCKETCOLUMNS </td>
+    <td>Specifies the columns to be considered for Bucketing. </td>
+    <td>No </td>
+  </tr>
+  <tr>
+    <td>TABLENAME </td>
+    <td>The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. </td>
+    <td>Yes </td>
+  </tr>
+  </tbody>
+</table><h3>Usage Guidelines</h3>
+<ul>
+  <li><p>The feature is supported for Spark 1.6.2 onwards, but the performance optimization is evident from Spark 2.1 onwards.</p></li>
+  <li><p>Bucketing can not be performed for columns of Complex Data Types.</p></li>
+  <li><p>Columns in the BUCKETCOLUMNS parameter must be either dimensions or measures; a combination of both is not supported.</p></li>
+</ul><h3>Example:</h3>
+<pre>
+  <code>CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+    productNumber Int,
+    productName String,
+    storeCity String,
+    storeProvince String,
+    productCategory String,
+    productBatch String,
+    saleQuantity Int,
+    revenue Int)
+  STORED BY 'carbondata'
+  TBLPROPERTIES ('COLUMN_GROUPS'='(productName,productCategory)',
+    'DICTIONARY_EXCLUDE'='productName',
+    'DICTIONARY_INCLUDE'='productNumber',
+    'NO_INVERTED_INDEX'='productBatch',
+    'BUCKETNUMBER'='4',
+    'BUCKETCOLUMNS'='productNumber,saleQuantity',
+    'TABLENAME'='productSalesTable')
+  </code>
+</pre>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/dml-operation-on-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/dml-operation-on-carbondata.html b/src/main/webapp/docs/latest/dml-operation-on-carbondata.html
index b876dae..a6a6d75 100644
--- a/src/main/webapp/docs/latest/dml-operation-on-carbondata.html
+++ b/src/main/webapp/docs/latest/dml-operation-on-carbondata.html
@@ -15,331 +15,432 @@
     KIND, either express or implied.  See the License for the
     specific language governing permissions and limitations
     under the License.
---><h1>DML Operations on CarbonData</h1><p>This tutorial guides you through the data manipulation language support provided by CarbonData.</p><h2>Overview</h2><p>The following DML operations are supported in CarbonData :</p>
+-->
+<h1>DML Operations on CarbonData</h1><p>This tutorial guides you through the data manipulation
+    language support provided by CarbonData.</p><h2>Overview</h2><p>The following DML operations are
+    supported in CarbonData :</p>
 <ul>
-  <li><a href="#load-data">LOAD DATA</a></li>
-  <li><a href="#insert-data-into-a-carbondata-table">INSERT DATA INTO A CARBONDATA TABLE</a></li>
-  <li><a href="#show-segments">SHOW SEGMENTS</a></li>
-  <li><a href="#delete-segment-by-id">DELETE SEGMENT BY ID</a></li>
-  <li><a href="#delete-segment-by-date">DELETE SEGMENT BY DATE</a></li>
-  <li><a href="#update-carbondata-table">UPDATE CARBONDATA TABLE</a></li>
-  <li><a href="#delete-records-from-carbondata-table">DELETE RECORDS FROM CARBONDATA TABLE</a></li>
-</ul><h2>LOAD DATA</h2><p>This command loads the user data in raw format to the CarbonData specific data format store, this allows CarbonData to provide good performance while querying the data. Please visit <a href="data-management.html">Data Management</a> for more details on LOAD.</p><h3>Syntax</h3><p><code>
-LOAD DATA [LOCAL] INPATH &#39;folder_path&#39; 
-INTO TABLE [db_name.]table_name 
-OPTIONS(property_name=property_value, ...)
-</code></p><p>OPTIONS are not mandatory for data loading process. Inside OPTIONS user can provide either of any options like DELIMITER, QUOTECHAR, ESCAPECHAR, MULTILINE as per requirement.</p><p>NOTE: The path shall be canonical path.</p><h3>Parameter Description</h3>
+    <li><a href="#load-data">LOAD DATA</a></li>
+    <li><a href="#insert-data">INSERT DATA INTO A CARBONDATA TABLE</a></li>
+    <li><a href="#show-segments">SHOW SEGMENTS</a></li>
+    <li><a href="#delete-id">DELETE SEGMENT BY ID</a></li>
+    <li><a href="#delete-date">DELETE SEGMENT BY DATE</a></li>
+    <li><a href="#update-carbondata">UPDATE CARBONDATA TABLE</a></li>
+    <li><a href="#delete-table">DELETE RECORDS FROM CARBONDATA TABLE</a></li>
+</ul><h2 id="load-data">LOAD DATA</h2><p>This command loads the user data in raw format to the
+    CarbonData specific data format store, this allows CarbonData to provide good performance while
+    querying the data. Please visit Data Management for more
+    details on LOAD.</p><h3>Syntax</h3><p><code>
+    LOAD DATA [LOCAL] INPATH &#39;folder_path&#39;
+    INTO TABLE [db_name.]table_name
+    OPTIONS(property_name=property_value, ...)
+</code></p><p>OPTIONS are not mandatory for the data loading process. Inside OPTIONS, the user can provide
+    any of the options like DELIMITER, QUOTECHAR, ESCAPECHAR, or MULTILINE as per requirement.</p>
+<p>NOTE: The path shall be a canonical path.</p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
+    <thead>
     <tr>
-      <th>Parameter </th>
-      <th>Description </th>
-      <th>Optional </th>
+        <th>Parameter</th>
+        <th>Description</th>
+        <th>Optional</th>
     </tr>
-  </thead>
-  <tbody>
+    </thead>
+    <tbody>
     <tr>
-      <td>folder_path </td>
-      <td>Path of raw csv data folder or file. </td>
-      <td>NO </td>
+        <td>folder_path</td>
+        <td>Path of raw csv data folder or file.</td>
+        <td>NO</td>
     </tr>
     <tr>
-      <td>db_name </td>
-      <td>Database name, if it is not specified then it uses the current database. </td>
-      <td>YES </td>
+        <td>db_name</td>
+        <td>Database name, if it is not specified then it uses the current database.</td>
+        <td>YES</td>
     </tr>
     <tr>
-      <td>table_name </td>
-      <td>The name of the table in provided database. </td>
-      <td>NO </td>
+        <td>table_name</td>
+        <td>The name of the table in provided database.</td>
+        <td>NO</td>
     </tr>
     <tr>
-      <td>OPTIONS </td>
-      <td>Extra options provided to Load </td>
-      <td>YES </td>
+        <td>OPTIONS</td>
+        <td>Extra options provided to Load</td>
+        <td>YES</td>
     </tr>
-  </tbody>
+    </tbody>
 </table><h3>Usage Guidelines</h3><p>You can use the following options to load data:</p>
 <ul>
-  <li><p><strong>DELIMITER:</strong> Delimiters can be provided in the load command.</p><p><code>
-OPTIONS(&#39;DELIMITER&#39;=&#39;,&#39;)
-</code></p></li>
-  <li><p><strong>QUOTECHAR:</strong> Quote Characters can be provided in the load command.</p><p><code>
-OPTIONS(&#39;QUOTECHAR&#39;=&#39;&quot;&#39;)
-</code></p></li>
-  <li><p><strong>COMMENTCHAR:</strong> Comment Characters can be provided in the load command if user want to comment lines.</p><p><code>
-OPTIONS(&#39;COMMENTCHAR&#39;=&#39;#&#39;)
-</code></p></li>
-  <li><p><strong>FILEHEADER:</strong> Headers can be provided in the LOAD DATA command if headers are missing in the source files.</p><p><code>
-OPTIONS(&#39;FILEHEADER&#39;=&#39;column1,column2&#39;) 
-</code></p></li>
-  <li><p><strong>MULTILINE:</strong> CSV with new line character in quotes.</p><p><code>
-OPTIONS(&#39;MULTILINE&#39;=&#39;true&#39;) 
-</code></p></li>
-  <li><p><strong>ESCAPECHAR:</strong> Escape char can be provided if user want strict validation of escape character on CSV.</p><p><code>
-OPTIONS(&#39;ESCAPECHAR&#39;=&#39;\&#39;) 
-</code></p></li>
-  <li><p><strong>COMPLEX_DELIMITER_LEVEL_1:</strong> Split the complex type data column in a row (eg., a$b$c --&gt; Array = {a,b,c}).</p><p><code>
-OPTIONS(&#39;COMPLEX_DELIMITER_LEVEL_1&#39;=&#39;$&#39;) 
-</code></p></li>
-  <li><p><strong>COMPLEX_DELIMITER_LEVEL_2:</strong> Split the complex type nested data column in a row. Applies level_1 delimiter &amp; applies level_2 based on complex data type (eg., a:b$c:d --&gt; Array&gt; = {{a,b},{c,d}}).</p><p><code>
-OPTIONS(&#39;COMPLEX_DELIMITER_LEVEL_2&#39;=&#39;:&#39;) 
-</code></p></li>
-  <li><p><strong>ALL_DICTIONARY_PATH:</strong> All dictionary files path.</p><p><code>
-OPTIONS(&#39;ALL_DICTIONARY_PATH&#39;=&#39;/opt/alldictionary/data.dictionary&#39;)
-</code></p></li>
-  <li><p><strong>COLUMNDICT:</strong> Dictionary file path for specified column.</p><p><code>
-OPTIONS(&#39;COLUMNDICT&#39;=&#39;column1:dictionaryFilePath1,
-column2:dictionaryFilePath2&#39;)
-</code></p><p>NOTE: ALL_DICTIONARY_PATH and COLUMNDICT can't be used together.</p></li>
-  <li><p><strong>DATEFORMAT:</strong> Date format for specified column.</p><p><code>
-OPTIONS(&#39;DATEFORMAT&#39;=&#39;column1:dateFormat1, column2:dateFormat2&#39;)
-</code></p><p>NOTE: Date formats are specified by date pattern strings. The date pattern letters in CarbonData are same as in JAVA. Refer to <a href="http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html">SimpleDateFormat</a>.</p></li>
-</ul><h3>Example:</h3><p><code>
-LOAD DATA local inpath &#39;/opt/rawdata/data.csv&#39; INTO table carbontable
-options(&#39;DELIMITER&#39;=&#39;,&#39;, &#39;QUOTECHAR&#39;=&#39;&quot;&#39;,&#39;COMMENTCHAR&#39;=&#39;#&#39;,
-&#39;FILEHEADER&#39;=&#39;empno,empname,designation,doj,workgroupcategory,
- workgroupcategoryname,deptno,deptname,projectcode,
- projectjoindate,projectenddate,attendance,utilization,salary&#39;,
-&#39;MULTILINE&#39;=&#39;true&#39;,&#39;ESCAPECHAR&#39;=&#39;\&#39;,&#39;COMPLEX_DELIMITER_LEVEL_1&#39;=&#39;$&#39;, 
-&#39;COMPLEX_DELIMITER_LEVEL_2&#39;=&#39;:&#39;,
-&#39;ALL_DICTIONARY_PATH&#39;=&#39;/opt/alldictionary/data.dictionary&#39;
-)
-</code></p><h2>INSERT DATA INTO A CARBONDATA TABLE</h2><p>This command inserts data into a CarbonData table. It is defined as a combination of two queries Insert and Select query respectively. It inserts records from a source table into a target CarbonData table. The source table can be a Hive table, Parquet table or a CarbonData table itself. It comes with the functionality to aggregate the records of a table by performing Select query on source table and load its corresponding resultant records into a CarbonData table.</p><p><strong>NOTE</strong> : The client node where the INSERT command is executing, must be part of the cluster.</p><h3>Syntax</h3><p><code>
-INSERT INTO TABLE &lt;CARBONDATA TABLE&gt; SELECT * FROM sourceTableName 
-[ WHERE { &lt;filter_condition&gt; } ];
+    <li><p><strong>DELIMITER:</strong> Delimiters can be provided in the load command.</p>
+        <p><code>
+            OPTIONS(&#39;DELIMITER&#39;=&#39;,&#39;)
+        </code></p></li>
+    <li><p><strong>QUOTECHAR:</strong> Quote Characters can be provided in the load command.</p>
+        <p><code>
+            OPTIONS(&#39;QUOTECHAR&#39;=&#39;&quot;&#39;)
+        </code></p></li>
+    <li><p><strong>COMMENTCHAR:</strong> Comment characters can be provided in the load command if
+        the user wants to comment out lines.</p>
+        <p><code>
+            OPTIONS(&#39;COMMENTCHAR&#39;=&#39;#&#39;)
+        </code></p></li>
+    <li><p><strong>FILEHEADER:</strong> Headers can be provided in the LOAD DATA command if headers
+        are missing in the source files.</p>
+        <p><code>
+            OPTIONS(&#39;FILEHEADER&#39;=&#39;column1,column2&#39;)
+        </code></p></li>
+    <li><p><strong>MULTILINE:</strong> CSV with new line character in quotes.</p>
+        <p><code>
+            OPTIONS(&#39;MULTILINE&#39;=&#39;true&#39;)
+        </code></p></li>
+    <li><p><strong>ESCAPECHAR:</strong> An escape char can be provided if the user wants strict validation
+        of the escape character in the CSV files.</p>
+        <p><code>
+            OPTIONS(&#39;ESCAPECHAR&#39;=&#39;\&#39;)
+        </code></p></li>
+    <li><p><strong>COMPLEX_DELIMITER_LEVEL_1:</strong> Split the complex type data column in a row
+        (eg., a$b$c --&gt; Array = {a,b,c}).</p>
+        <p><code>
+            OPTIONS(&#39;COMPLEX_DELIMITER_LEVEL_1&#39;=&#39;$&#39;)
+        </code></p></li>
+    <li><p><strong>COMPLEX_DELIMITER_LEVEL_2:</strong> Split the complex type nested data column in
+        a row. Applies the level_1 delimiter &amp; applies level_2 based on complex data type (eg.,
+        a:b$c:d --&gt; Array&lt;Array&gt; = {{a,b},{c,d}}).</p>
+        <p><code>
+            OPTIONS(&#39;COMPLEX_DELIMITER_LEVEL_2&#39;=&#39;:&#39;)
+        </code></p></li>
+    <li><p><strong>ALL_DICTIONARY_PATH:</strong> All dictionary files path.</p>
+        <p><code>
+            OPTIONS(&#39;ALL_DICTIONARY_PATH&#39;=&#39;/opt/alldictionary/data.dictionary&#39;)
+        </code></p></li>
+    <li><p><strong>COLUMNDICT:</strong> Dictionary file path for the specified column (a usage
+        sketch follows this list).</p>
+        <p><code>
+            OPTIONS(&#39;COLUMNDICT&#39;=&#39;column1:dictionaryFilePath1,
+            column2:dictionaryFilePath2&#39;)
+        </code></p>
+        <p>NOTE: ALL_DICTIONARY_PATH and COLUMNDICT cannot be used together.</p></li>
+    <li><p><strong>DATEFORMAT:</strong> Date format for the specified column (a usage sketch
+        follows this list).</p>
+        <p><code>
+            OPTIONS(&#39;DATEFORMAT&#39;=&#39;column1:dateFormat1, column2:dateFormat2&#39;)
+        </code></p>
+        <p>NOTE: Date formats are specified by date pattern strings. The date pattern letters in
+            CarbonData are the same as in Java. Refer to <a
+                    href="http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html">SimpleDateFormat</a>.
+        </p></li>
+    <li><p><strong>USE_KETTLE:</strong> This option specifies whether to use Kettle for loading
+        the data. By default, Kettle is not used for data loading.</p>
+        <p><code>
+            OPTIONS(&#39;USE_KETTLE&#39;=&#39;FALSE&#39;)
+        </code></p>
+        <p>NOTE: It is recommended to set this option to false.</p></li>
+    <li><p><strong>SINGLE_PASS:</strong> Single pass loading enables a single job to finish the
+        data load with dictionary generation on the fly. It enhances performance in scenarios
+        where the data loads subsequent to the initial load involve only a few incremental
+        updates to the dictionary.</p>
+
+        <p>This option specifies whether to use a single pass for loading data. By default, this
+            option is set to FALSE (a usage sketch follows this list).</p>
+        <p><code>
+            OPTIONS(&#39;SINGLE_PASS&#39;=&#39;TRUE&#39;)
+        </code></p>
+        <p>NOTE:</p>
+        <ul>
+            <li><p>If this option is set to TRUE, data loading takes less time.</p></li>
+            <li><p>If this option is set to any invalid value other than TRUE or FALSE, the
+                default value is used.</p></li>
+        </ul>
+    </li>
+</ul>
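+
+<p>As a minimal sketch of the DATEFORMAT and SINGLE_PASS options referenced above (the file path,
+    table and column names here are hypothetical, for illustration only):</p>
+<p><pre><code>
+    LOAD DATA local inpath &#39;/opt/rawdata/sales.csv&#39; INTO table salestable
+    options(&#39;DELIMITER&#39;=&#39;,&#39;, &#39;QUOTECHAR&#39;=&#39;&quot;&#39;,
+    &#39;FILEHEADER&#39;=&#39;saleid,saledate,amount&#39;,
+    &#39;DATEFORMAT&#39;=&#39;saledate:yyyy-MM-dd&#39;,
+    &#39;SINGLE_PASS&#39;=&#39;TRUE&#39;)
+</code></pre></p>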
+
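+<p>Likewise, a minimal sketch of the COLUMNDICT option referenced above (the dictionary file
+    paths are hypothetical; recall that COLUMNDICT cannot be combined with ALL_DICTIONARY_PATH):</p>
+<p><pre><code>
+    LOAD DATA local inpath &#39;/opt/rawdata/data.csv&#39; INTO table carbontable
+    options(&#39;DELIMITER&#39;=&#39;,&#39;, &#39;QUOTECHAR&#39;=&#39;&quot;&#39;,
+    &#39;COLUMNDICT&#39;=&#39;empname:/opt/dictionary/empname.dictionary,
+    deptname:/opt/dictionary/deptname.dictionary&#39;)
+</code></pre></p>
+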
+<h3>Example:</h3><p><pre><code>
+    LOAD DATA local inpath &#39;/opt/rawdata/data.csv&#39; INTO table carbontable
+    options(&#39;DELIMITER&#39;=&#39;,&#39;, &#39;QUOTECHAR&#39;=&#39;&quot;&#39;,&#39;COMMENTCHAR&#39;=&#39;#&#39;,
+    &#39;FILEHEADER&#39;=&#39;empno,empname,designation,doj,workgroupcategory,workgroupcategoryname,deptno,deptname,projectcode,
+    projectjoindate,projectenddate,attendance,utilization,salary&#39;,
+    &#39;MULTILINE&#39;=&#39;true&#39;,&#39;ESCAPECHAR&#39;=&#39;\&#39;,&#39;COMPLEX_DELIMITER_LEVEL_1&#39;=&#39;$&#39;,
+    &#39;COMPLEX_DELIMITER_LEVEL_2&#39;=&#39;:&#39;,
+    &#39;ALL_DICTIONARY_PATH&#39;=&#39;/opt/alldictionary/data.dictionary&#39;)
+</code></pre></p><h2 id="insert-data">INSERT DATA INTO A CARBONDATA TABLE</h2><p>This command inserts data
+    into a CarbonData table. It is defined as a combination of two queries Insert and Select query
+    respectively. It inserts records from a source table into a target CarbonData table. The source
+    table can be a Hive table, Parquet table or a CarbonData table itself. It comes with the
+    functionality to aggregate the records of a table by performing Select query on source table and
+    load its corresponding resultant records into a CarbonData table.</p><p><strong>NOTE</strong> :
+    The client node where the INSERT command is executing, must be part of the cluster.</p><h3>
+    Syntax</h3><p><code>
+    INSERT INTO TABLE &lt;CARBONDATA TABLE&gt; SELECT * FROM sourceTableName
+    [ WHERE { &lt;filter_condition&gt; } ];
 </code></p><p>You can also omit the <code>table</code> keyword and write your query as:</p><p><code>
-INSERT INTO &lt;CARBONDATA TABLE&gt; SELECT * FROM sourceTableName 
-[ WHERE { &lt;filter_condition&gt; } ];
+    INSERT INTO &lt;CARBONDATA TABLE&gt; SELECT * FROM sourceTableName
+    [ WHERE { &lt;filter_condition&gt; } ];
 </code></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
+    <thead>
     <tr>
-      <th>Parameter </th>
-      <th>Description </th>
+        <th>Parameter</th>
+        <th>Description</th>
     </tr>
-  </thead>
-  <tbody>
+    </thead>
+    <tbody>
     <tr>
-      <td>CARBON TABLE </td>
-      <td>The name of the Carbon table in which you want to perform the insert operation. </td>
+        <td>CARBON TABLE</td>
+        <td>The name of the Carbon table in which you want to perform the insert operation.</td>
     </tr>
     <tr>
-      <td>sourceTableName </td>
-      <td>The table from which the records are read and inserted into destination CarbonData table. </td>
+        <td>sourceTableName</td>
+        <td>The table from which the records are read and inserted into destination CarbonData
+            table.
+        </td>
     </tr>
-  </tbody>
-</table><h3>Usage Guidelines</h3><p>The following condition must be met for successful insert operation :</p>
+    </tbody>
+</table><h3>Usage Guidelines</h3><p>The following conditions must be met for a successful insert
+    operation:</p>
 <ul>
-  <li>The source table and the CarbonData table must have the same table schema.</li>
-  <li>The table must be created.</li>
-  <li>Overwrite is not supported for CarbonData table.</li>
-  <li>The data type of source and destination table columns should be same, else the data from source table will be treated as bad records and the INSERT command fails.</li>
-  <li>INSERT INTO command does not support partial success if bad records are found, it will fail.</li>
-  <li>Data cannot be loaded or updated in source table while insert from source table to target table is in progress.</li>
-</ul><p>To enable data load or update during insert operation, configure the following property to true.</p><p><code>
-carbon.insert.persist.enable=true
-</code></p><p>By default the above configuration will be false.</p><p><strong>NOTE</strong>: Enabling this property will reduce the performance.</p><h3>Examples</h3><p><code>
-INSERT INTO table1 SELECT item1 ,sum(item2 + 1000) as result FROM 
-table2 group by item1;
+    <li>The source table and the CarbonData table must have the same table schema.</li>
+    <li>The table must already have been created.</li>
+    <li>Overwrite is not supported for CarbonData tables.</li>
+    <li>The data types of the source and destination table columns should be the same; otherwise,
+        the data from the source table is treated as bad records and the INSERT command fails.
+    </li>
+    <li>The INSERT INTO command does not support partial success; if bad records are found, it
+        fails.
+    </li>
+    <li>Data cannot be loaded into or updated in the source table while an insert from that
+        source table to the target table is in progress.
+    </li>
+</ul><p>To enable data load or update during an insert operation, set the following property to
+    true.</p><p><code>
+    carbon.insert.persist.enable=true
+</code></p><p>By default, this property is set to false.</p><p><strong>NOTE</strong>:
+    Enabling this property reduces performance.</p>
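+<p>A minimal sketch of where this property could be set; this assumes it goes into the
+    carbon.properties configuration file (the exact file location is deployment-specific):</p>
+<p><pre><code>
+    # carbon.properties (assumed location, e.g. the cluster&#39;s CarbonData conf directory)
+    # Allow load/update on the source table while an insert from it is in progress.
+    # NOTE: enabling this reduces performance.
+    carbon.insert.persist.enable=true
+</code></pre></p><h3>Examples</h3><p><code>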
+    INSERT INTO table1 SELECT item1 ,sum(item2 + 1000) as result FROM
+    table2 group by item1;
 </code></p><p><code>
-INSERT INTO table1 SELECT item1, item2, item3 FROM table2 
-where item2=&#39;xyz&#39;;
+    INSERT INTO table1 SELECT item1, item2, item3 FROM table2
+    where item2=&#39;xyz&#39;;
 </code></p><p><code>
-INSERT INTO table1 SELECT * FROM table2 
-where exists (select * from table3 
-where table2.item1 = table3.item1);
-</code></p><p><strong>The Status Success/Failure shall be captured in the driver log.</strong></p><h2>SHOW SEGMENTS</h2><p>This command is used to get the segments of CarbonData table.</p><p><code>
-SHOW SEGMENTS FOR TABLE [db_name.]table_name 
-LIMIT number_of_segments;
+    INSERT INTO table1 SELECT * FROM table2
+    where exists (select * from table3
+    where table2.item1 = table3.item1);
+</code></p><p><strong>The Status Success/Failure shall be captured in the driver log.</strong></p>
+<h2 id="show-segments">SHOW SEGMENTS</h2><p>This command is used to get the segments of CarbonData
+    table.</p><p><code>
+    SHOW SEGMENTS FOR TABLE [db_name.]table_name
+    LIMIT number_of_segments;
 </code></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
+    <thead>
     <tr>
-      <th>Parameter </th>
-      <th>Description </th>
-      <th>Optional </th>
+        <th>Parameter</th>
+        <th>Description</th>
+        <th>Optional</th>
     </tr>
-  </thead>
-  <tbody>
+    </thead>
+    <tbody>
     <tr>
-      <td>db_name </td>
-      <td>Database name, if it is not specified then it uses the current database. </td>
-      <td>YES </td>
+        <td>db_name</td>
+        <td>Database name, if it is not specified then it uses the current database.</td>
+        <td>YES</td>
     </tr>
     <tr>
-      <td>table_name </td>
-      <td>The name of the table in provided database. </td>
-      <td>NO </td>
+        <td>table_name</td>
+        <td>The name of the table in provided database.</td>
+        <td>NO</td>
     </tr>
     <tr>
-      <td>number_of_segments </td>
-      <td>Limit the output to this number. </td>
-      <td>YES </td>
+        <td>number_of_segments</td>
+        <td>Limit the output to this number.</td>
+        <td>YES</td>
     </tr>
-  </tbody>
+    </tbody>
 </table><h3>Example:</h3><p><code>
-SHOW SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4;
-</code></p><h2>DELETE SEGMENT BY ID</h2><p>This command is used to delete segment by using the segment ID. Each segment has a unique segment ID associated with it. Using this segment ID, you can remove the segment.</p><p>The following command will get the segmentID.</p><p><code>
-SHOW SEGMENTS FOR Table dbname.tablename LIMIT number_of_segments
-</code></p><p>After you retrieve the segment ID of the segment that you want to delete, execute the following command to delete the selected segment.</p><p><code>
-DELETE SEGMENT segment_sequence_id1, segments_sequence_id2, .... 
-FROM TABLE tableName
+    SHOW SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4;
+</code></p><h2 id="delete-id">DELETE SEGMENT BY ID</h2><p>This command is used to delete segment by
+    using the segment ID. Each segment has a unique segment ID associated with it. Using this
+    segment ID, you can remove the segment.</p><p>The following command will get the segmentID.</p>
+<p><code>
+    SHOW SEGMENTS FOR TABLE dbname.tablename LIMIT number_of_segments
+</code></p><p>After you retrieve the segment ID of the segment that you want to delete, execute the
+    following command to delete the selected segment.</p><p><code>
+    DELETE SEGMENT segment_sequence_id1, segment_sequence_id2, ....
+    FROM TABLE tableName
 </code></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
+    <thead>
     <tr>
-      <th>Parameter </th>
-      <th>Description </th>
-      <th>Optional </th>
+        <th>Parameter</th>
+        <th>Description</th>
+        <th>Optional</th>
     </tr>
-  </thead>
-  <tbody>
+    </thead>
+    <tbody>
     <tr>
-      <td>segment_id </td>
-      <td>Segment Id of the load. </td>
-      <td>NO </td>
+        <td>segment_id</td>
+        <td>Segment Id of the load.</td>
+        <td>NO</td>
     </tr>
     <tr>
-      <td>db_name </td>
-      <td>Database name, if it is not specified then it uses the current database. </td>
-      <td>YES </td>
+        <td>db_name</td>
+        <td>Database name, if it is not specified then it uses the current database.</td>
+        <td>YES</td>
     </tr>
     <tr>
-      <td>table_name </td>
-      <td>The name of the table in provided database. </td>
-      <td>NO </td>
+        <td>table_name</td>
+        <td>The name of the table in provided database.</td>
+        <td>NO</td>
     </tr>
-  </tbody>
+    </tbody>
 </table><h3>Example:</h3><p><code>
-DELETE SEGMENT 0 FROM TABLE CarbonDatabase.CarbonTable;
-DELETE SEGMENT 0.1,5,8 FROM TABLE CarbonDatabase.CarbonTable;
-</code>  NOTE: Here 0.1 is compacted segment sequence id. </p><h2>DELETE SEGMENT BY DATE</h2><p>This command will allow to delete the CarbonData segment(s) from the store based on the date provided by the user in the DML command. The segment created before the particular date will be removed from the specific stores.</p><p><code>
-DELETE FROM TABLE [schema_name.]table_name 
-WHERE[DATE_FIELD]BEFORE [DATE_VALUE]
+    DELETE SEGMENT 0 FROM TABLE CarbonDatabase.CarbonTable;
+    DELETE SEGMENT 0.1,5,8 FROM TABLE CarbonDatabase.CarbonTable;
+</code> NOTE: Here 0.1 is the sequence ID of a compacted segment. </p><h2 id="delete-date">DELETE SEGMENT BY
+    DATE</h2><p>This command allows deleting CarbonData segment(s) from the store based on
+    the date provided by the user in the DML command. Segments created before the specified date
+    are removed from the store.</p><p><code>
+    DELETE FROM TABLE [schema_name.]table_name
+    WHERE [DATE_FIELD] BEFORE [DATE_VALUE]
 </code></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
+    <thead>
     <tr>
-      <th>Parameter </th>
-      <th>Description </th>
-      <th>Optional </th>
+        <th>Parameter</th>
+        <th>Description</th>
+        <th>Optional</th>
     </tr>
-  </thead>
-  <tbody>
+    </thead>
+    <tbody>
     <tr>
-      <td>DATE_VALUE </td>
-      <td>Valid segment load start time value. All the segments before this specified date will be deleted. </td>
-      <td>NO </td>
+        <td>DATE_VALUE</td>
+        <td>Valid segment load start time value. All the segments before this specified date will be
+            deleted.
+        </td>
+        <td>NO</td>
     </tr>
     <tr>
-      <td>db_name </td>
-      <td>Database name, if it is not specified then it uses the current database. </td>
-      <td>YES </td>
+        <td>db_name</td>
+        <td>Database name, if it is not specified then it uses the current database.</td>
+        <td>YES</td>
     </tr>
     <tr>
-      <td>table_name </td>
-      <td>The name of the table in provided database. </td>
-      <td>NO </td>
+        <td>table_name</td>
+        <td>The name of the table in provided database.</td>
+        <td>NO</td>
     </tr>
-  </tbody>
+    </tbody>
 </table><h3>Example:</h3><p><code>
- DELETE SEGMENTS FROM TABLE CarbonDatabase.CarbonTable 
- WHERE STARTTIME BEFORE &#39;2017-06-01 12:05:06&#39;;  
-</code></p><h2>Update CarbonData Table</h2><p>This command will allow to update the carbon table based on the column expression and optional filter conditions.</p><h3>Syntax</h3><p><code>
- UPDATE &lt;table_name&gt;
- SET (column_name1, column_name2, ... column_name n) =
- (column1_expression , column2_expression . .. column n_expression )
- [ WHERE { &lt;filter_condition&gt; } ];
-</code></p><p>alternatively the following the command can also be used for updating the CarbonData Table :</p><p><code>
-UPDATE &lt;table_name&gt;
-SET (column_name1, column_name2,) =
-(select sourceColumn1, sourceColumn2 from sourceTable
-[ WHERE { &lt;filter_condition&gt; } ] )
-[ WHERE { &lt;filter_condition&gt; } ];
+    DELETE SEGMENTS FROM TABLE CarbonDatabase.CarbonTable
+    WHERE STARTTIME BEFORE &#39;2017-06-01 12:05:06&#39;;
+</code></p><h2 id="update-carbondata">Update CarbonData Table</h2><p>This command will allow to
+    update the carbon table based on the column expression and optional filter conditions.</p><h3>
+    Syntax</h3><p><code>
+    UPDATE &lt;table_name&gt;
+    SET (column_name1, column_name2, ... column_name_n) =
+    (column1_expression, column2_expression ... column_n_expression)
+    [ WHERE { &lt;filter_condition&gt; } ];
+</code></p><p>Alternatively, the following command can also be used to update the CarbonData
+    table:</p><p><code>
+    UPDATE &lt;table_name&gt;
+    SET (column_name1, column_name2, ...) =
+    (select sourceColumn1, sourceColumn2 from sourceTable
+    [ WHERE { &lt;filter_condition&gt; } ] )
+    [ WHERE { &lt;filter_condition&gt; } ];
 </code></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
+    <thead>
     <tr>
-      <th>Parameter </th>
-      <th>Description </th>
+        <th>Parameter</th>
+        <th>Description</th>
     </tr>
-  </thead>
-  <tbody>
+    </thead>
+    <tbody>
     <tr>
-      <td>table_name </td>
-      <td>The name of the Carbon table in which you want to perform the update operation. </td>
+        <td>table_name</td>
+        <td>The name of the Carbon table in which you want to perform the update operation.</td>
     </tr>
     <tr>
-      <td>column_name </td>
-      <td>The destination columns to be updated. </td>
+        <td>column_name</td>
+        <td>The destination columns to be updated.</td>
     </tr>
     <tr>
-      <td>sourceColumn </td>
-      <td>The source table column values to be updated in destination table. </td>
+        <td>sourceColumn</td>
+        <td>The source table column values to be updated in destination table.</td>
     </tr>
     <tr>
-      <td>sourceTable </td>
-      <td>The table from which the records are updated into destination Carbon table. </td>
+        <td>sourceTable</td>
+        <td>The table from which the records are updated into destination Carbon table.</td>
     </tr>
-  </tbody>
-</table><h3>Usage Guidelines</h3><p>The following conditions must be met for successful updation :</p>
+    </tbody>
+</table><h3>Usage Guidelines</h3><p>The following conditions must be met for a successful
+    update:</p>
 <ul>
-  <li>The update command fails if multiple input rows in source table are matched with single row in destination table.</li>
-  <li>If the source table generates empty records, the update operation will complete successfully without updating the table.</li>
-  <li>If a source table row does not correspond to any of the existing rows in a destination table, the update operation will complete successfully without updating the table.</li>
-  <li>In sub-query, if the source table and the target table are same, then the update operation fails.</li>
-  <li>If the sub-query used in UPDATE statement contains aggregate method or group by query, then the UPDATE operation fails.</li>
-</ul><h3>Examples</h3><p>Update is not supported for queries that contain aggregate or group by.</p><p><code>
- UPDATE t_carbn01 a
- SET (a.item_type_code, a.profit) = ( SELECT b.item_type_cd,
- sum(b.profit) from t_carbn01b b
- WHERE item_type_cd =2 group by item_type_code);
-</code></p><p>Here the Update Operation fails as the query contains aggregate function sum(b.profit) and group by clause in the sub-query.</p><p><code>
-UPDATE carbonTable1 d
-SET(d.column3,d.column5 ) = (SELECT s.c33 ,s.c55
-FROM sourceTable1 s WHERE d.column1 = s.c11)
-WHERE d.column1 = &#39;china&#39; EXISTS( SELECT * from table3 o where o.c2 &gt; 1);
+    <li>The update command fails if multiple input rows in the source table match a single row
+        in the destination table.
+    </li>
+    <li>If the source table generates empty records, the update operation completes successfully
+        without updating the table.
+    </li>
+    <li>If a source table row does not correspond to any of the existing rows in the destination
+        table, the update operation completes successfully without updating the table.
+    </li>
+    <li>If the source table and the target table in a sub-query are the same, the update
+        operation fails.
+    </li>
+    <li>If the sub-query used in the UPDATE statement contains an aggregate method or a group by
+        query, the UPDATE operation fails.
+    </li>
+</ul><h3>Examples</h3><p>Update is not supported for queries that contain aggregate functions or
+    group by clauses.</p>
+<p><code>
+    UPDATE t_carbn01 a
+    SET (a.item_type_code, a.profit) = ( SELECT b.item_type_cd,
+    sum(b.profit) from t_carbn01b b
+    WHERE item_type_cd =2 group by item_type_code);
+</code></p><p>Here the update operation fails because the sub-query contains the aggregate
+    function sum(b.profit) and a group by clause.</p><p><code>
+    UPDATE carbonTable1 d
+    SET (d.column3, d.column5) = (SELECT s.c33, s.c55
+    FROM sourceTable1 s WHERE d.column1 = s.c11)
+    WHERE d.column1 = &#39;china&#39; AND EXISTS( SELECT * from table3 o where o.c2 &gt; 1);
 </code></p><p><code>
-UPDATE carbonTable1 d SET (c3) = (SELECT s.c33 from sourceTable1 s
-WHERE d.column1 = s.c11)
-WHERE exists( select * from iud.other o where o.c2 &gt; 1);
+    UPDATE carbonTable1 d SET (c3) = (SELECT s.c33 from sourceTable1 s
+    WHERE d.column1 = s.c11)
+    WHERE exists( select * from iud.other o where o.c2 &gt; 1);
 </code></p><p><code>
-UPDATE carbonTable1 SET (c2, c5 ) = (c2 + 1, concat(c5 , &quot;y&quot; ));
+    UPDATE carbonTable1 SET (c2, c5) = (c2 + 1, concat(c5, &quot;y&quot;));
 </code></p><p><code>
+    UPDATE carbonTable1 d SET (c2, c5) = (c2 + 1, &quot;xyx&quot;)
-WHERE d.column1 = &#39;india&#39;;
+    UPDATE carbonTable1 d SET (c2, c5 ) = (c2 + 1, &quot;xyx&quot;)
+    WHERE d.column1 = &#39;india&#39;;
 </code></p><p><code>
-UPDATE carbonTable1 d SET (c2, c5 ) = (c2 + 1, &quot;xyx&quot;)
-WHERE d.column1 = &#39;india&#39;
-and EXISTS( SELECT * FROM table3 o WHERE o.column2 &gt; 1);
-</code></p><p><strong>The Status Success/Failure shall be captured in the driver log and the client.</strong></p><h2>Delete Records from CarbonData Table</h2><p>This command allows us to delete records from CarbonData table.</p><h3>Syntax</h3><p><code>
-DELETE FROM table_name [WHERE expression];
+    UPDATE carbonTable1 d SET (c2, c5) = (c2 + 1, &quot;xyx&quot;)
+    WHERE d.column1 = &#39;india&#39;
+    and EXISTS( SELECT * FROM table3 o WHERE o.column2 &gt; 1);
+</code></p><p><strong>The Status Success/Failure shall be captured in the driver log and the
+    client.</strong></p><h2 id="delete-table">Delete Records from CarbonData Table</h2><p>This
+    command allows us to delete records from CarbonData table.</p><h3>Syntax</h3><p><code>
+    DELETE FROM table_name [WHERE expression];
 </code></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
+    <thead>
     <tr>
-      <th>Parameter </th>
-      <th>Description </th>
+        <th>Parameter</th>
+        <th>Description</th>
     </tr>
-  </thead>
-  <tbody>
+    </thead>
+    <tbody>
     <tr>
-      <td>table_name </td>
-      <td>The name of the Carbon table in which you want to perform the delete. </td>
+        <td>table_name</td>
+        <td>The name of the Carbon table in which you want to perform the delete.</td>
     </tr>
-  </tbody>
+    </tbody>
 </table><h3>Examples</h3><p><code>
-DELETE FROM columncarbonTable1 d WHERE d.column1  = &#39;china&#39;;
+    DELETE FROM columncarbonTable1 d WHERE d.column1 = &#39;china&#39;;
 </code></p><p><code>
-DELETE FROM dest WHERE column1 IN (&#39;china&#39;, &#39;USA&#39;);
+    DELETE FROM dest WHERE column1 IN (&#39;china&#39;, &#39;USA&#39;);
 </code></p><p><code>
-DELETE FROM columncarbonTable1
-WHERE column1 IN (SELECT column11 FROM sourceTable2);
+    DELETE FROM columncarbonTable1
+    WHERE column1 IN (SELECT column11 FROM sourceTable2);
 </code></p><p><code>
-DELETE FROM columncarbonTable1
-WHERE column1 IN (SELECT column11 FROM sourceTable2 WHERE
-column1 = &#39;USA&#39;);
+    DELETE FROM columncarbonTable1
+    WHERE column1 IN (SELECT column11 FROM sourceTable2 WHERE
+    column1 = &#39;USA&#39;);
 </code></p><p><code>
-DELETE FROM columncarbonTable1 WHERE column2 &gt;= 4
-</code></p><p><strong>The Status Success/Failure shall be captured in the driver log and the client.</strong></p>
\ No newline at end of file
+    DELETE FROM columncarbonTable1 WHERE column2 &gt;= 4
+</code></p><p><strong>The Status Success/Failure shall be captured in the driver log and the
+    client.</strong></p>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/faq.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/faq.html b/src/main/webapp/docs/latest/faq.html
index 0645fea..3a7a354 100644
--- a/src/main/webapp/docs/latest/faq.html
+++ b/src/main/webapp/docs/latest/faq.html
@@ -22,5 +22,6 @@
   <li><p><strong>Getting NotImplementedException for subquery using IN and EXISTS</strong></p><p>Subquery with in and exists not supported in CarbonData.</p></li>
   <li><p><strong>Getting Exceptions on creating a view</strong></p><p>View not supported in CarbonData.</p></li>
   <li><p><strong>How to verify if ColumnGroups have been created as desired.</strong></p><p>Try using desc table query.</p></li>
-  <li><p><strong>Did anyone try to run CarbonData on windows? Is it supported on Windows?</strong></p><p>We may provide support for windows in future. You are welcome to contribute if you want to add the support :) </p></li>
+  <li><p><strong>Did anyone try to run CarbonData on windows? Is it supported on Windows?</strong></p><p>We may provide support for Windows in the future. You are welcome to contribute if you want to add the support :)</p></li>
+  <li><p><strong>Can we execute concurrent operations (load, insert, update) on a table from multiple workers?</strong></p><p>Concurrency is not supported in the current release of CarbonData.</p></li>
 </ul>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/file-structure-of-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/file-structure-of-carbondata.html b/src/main/webapp/docs/latest/file-structure-of-carbondata.html
new file mode 100644
index 0000000..19401b9
--- /dev/null
+++ b/src/main/webapp/docs/latest/file-structure-of-carbondata.html
@@ -0,0 +1,6 @@
+<h1>CarbonData File Structure</h1><p>CarbonData files contain groups of data called blocklets, along with all required information such as schema, offsets and indices, in a file footer, co-located in HDFS.</p><p>The file footer can be read once to build the indices in memory, which can be utilized for optimizing the scans and processing for all subsequent queries.</p><p>Each blocklet in the file is further divided into chunks of data called data chunks. Each data chunk is organized either in columnar format or row format, and stores the data of either a single column or a set of columns. All blocklets in a file contain the same number and type of data chunks.</p><p><img src="../../../webapp/docs/latest/images/carbon_data_file_structure_new.png?raw=true" alt="CarbonData File Structure" /></p><p>Each data chunk contains multiple groups of data called pages. There are three types of pages.</p>
+<ul>
+  <li>Data Page: Contains the encoded data of a column/group of columns.</li>
+  <li>Row ID Page (optional): Contains the row ID mappings used when the data page is stored as an inverted index.</li>
+  <li>RLE Page (optional): Contains additional metadata used when the data page is RLE coded.</li>
+</ul><p><img src="../../../webapp/docs/latest/images/carbon_data_format_new.png?raw=true" alt="CarbonData File Format" /></p>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/images/CarbonData_logo.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/CarbonData_logo.png b/src/main/webapp/docs/latest/images/CarbonData_logo.png
old mode 100644
new mode 100755

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/images/carbon_data_file_structure_new.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/carbon_data_file_structure_new.png b/src/main/webapp/docs/latest/images/carbon_data_file_structure_new.png
old mode 100644
new mode 100755

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/images/carbon_data_format_new.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/carbon_data_format_new.png b/src/main/webapp/docs/latest/images/carbon_data_format_new.png
old mode 100644
new mode 100755

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/images/carbon_data_full_scan.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/carbon_data_full_scan.png b/src/main/webapp/docs/latest/images/carbon_data_full_scan.png
deleted file mode 100644
index 46715e7..0000000
Binary files a/src/main/webapp/docs/latest/images/carbon_data_full_scan.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/images/carbon_data_motivation.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/carbon_data_motivation.png b/src/main/webapp/docs/latest/images/carbon_data_motivation.png
deleted file mode 100644
index 6e454c6..0000000
Binary files a/src/main/webapp/docs/latest/images/carbon_data_motivation.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/images/carbon_data_olap_scan.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/carbon_data_olap_scan.png b/src/main/webapp/docs/latest/images/carbon_data_olap_scan.png
deleted file mode 100644
index c1dfb18..0000000
Binary files a/src/main/webapp/docs/latest/images/carbon_data_olap_scan.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/0d4cdb1c/src/main/webapp/docs/latest/images/carbon_data_random_scan.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/carbon_data_random_scan.png b/src/main/webapp/docs/latest/images/carbon_data_random_scan.png
deleted file mode 100644
index 7d44d34..0000000
Binary files a/src/main/webapp/docs/latest/images/carbon_data_random_scan.png and /dev/null differ