Posted to commits@carbondata.apache.org by ch...@apache.org on 2017/03/15 14:46:34 UTC

[2/3] incubator-carbondata-site git commit: Synchronized MD Files with Incubator CarbonData

Synchronized MD Files with Incubator CarbonData


Project: http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/commit/594dedec
Tree: http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/tree/594dedec
Diff: http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/diff/594dedec

Branch: refs/heads/asf-site
Commit: 594dedec9e1ff0c6429c5f4ffe4a7d7502471c19
Parents: 0839eb1
Author: PallaviSingh1992 <pa...@yahoo.co.in>
Authored: Tue Mar 14 17:58:18 2017 +0530
Committer: PallaviSingh1992 <pa...@yahoo.co.in>
Committed: Tue Mar 14 17:58:18 2017 +0530

----------------------------------------------------------------------
 .../latest/ddl-operation-on-carbondata.html     | 361 ++++++++++---------
 content/docs/latest/faq.html                    | 208 ++---------
 content/docs/latest/quick-start-guide.html      |  11 +-
 content/docs/latest/troubleshooting.html        |  88 +----
 content/pdf/maven-pdf-plugin.pdf                | Bin 164610 -> 160880 bytes
 .../latest/ddl-operation-on-carbondata.html     | 361 ++++++++++---------
 src/main/webapp/docs/latest/faq.html            | 208 ++---------
 .../webapp/docs/latest/quick-start-guide.html   |  13 +-
 .../webapp/docs/latest/troubleshooting.html     |  88 +----
 src/site/markdown/configuration-parameters.md   |  14 +-
 src/site/markdown/data-management.md            |   0
 .../markdown/ddl-operation-on-carbondata.md     | 110 +++---
 .../markdown/dml-operation-on-carbondata.md     |  18 +-
 src/site/markdown/faq.md                        |  96 ++---
 .../markdown/file-structure-of-carbondata.md    |  23 +-
 src/site/markdown/installation-guide.md         | 146 ++++----
 src/site/markdown/quick-start-guide.md          |  54 ++-
 .../supported-data-types-in-carbondata.md       |  19 +
 src/site/markdown/troubleshooting.md            |  99 +----
 19 files changed, 700 insertions(+), 1217 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/content/docs/latest/ddl-operation-on-carbondata.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/ddl-operation-on-carbondata.html b/content/docs/latest/ddl-operation-on-carbondata.html
index ece7d53..c9f238c 100644
--- a/content/docs/latest/ddl-operation-on-carbondata.html
+++ b/content/docs/latest/ddl-operation-on-carbondata.html
@@ -18,11 +18,11 @@
 -->
 <h1>DDL Operations on CarbonData</h1><p>This tutorial guides you through the data definition language support provided by CarbonData.</p><h2>Overview</h2><p>The following DDL operations are supported in CarbonData :</p>
 <ul>
-  <li><a href="#create-table">CREATE TABLE</a></li>
-  <li><a href="#show-table">SHOW TABLE</a></li>
-  <li><a href="#drop-table">DROP TABLE</a></li>
-  <li><a href="#compaction">COMPACTION</a></li>
-  <li><a href="#bucketing">BUCKETING</a></li>
+    <li><a href="#create-table">CREATE TABLE</a></li>
+    <li><a href="#show-table">SHOW TABLE</a></li>
+    <li><a href="#drop-table">DROP TABLE</a></li>
+    <li><a href="#compaction">COMPACTION</a></li>
+    <li><a href="#bucketing">BUCKETING</a></li>
 </ul><h2 id="create-table">CREATE TABLE</h2><p>This command can be used to create a CarbonData table by specifying the list of fields along with the table properties.</p><p><pre><code>
   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
   [(col_name data_type, ...)]
@@ -31,67 +31,67 @@
   // All Carbon&#39;s additional table options will go into properties
 </code></pre></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
-  <tr>
-    <th>Parameter </th>
-    <th>Description </th>
-    <th>Optional </th>
-  </tr>
-  </thead>
-  <tbody>
-  <tr>
-    <td>db_name </td>
-    <td>Name of the database. Database name should consist of alphanumeric characters and underscore(_) special character. </td>
-    <td>Yes </td>
-  </tr>
-  <tr>
-    <td>field_list </td>
-    <td>Comma separated List of fields with data type. The field names should consist of alphanumeric characters and underscore(_) special character. </td>
-    <td>No </td>
-  </tr>
-  <tr>
-    <td>table_name </td>
-    <td>The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. </td>
-    <td>No </td>
-  </tr>
-  <tr>
-    <td>STORED BY </td>
-    <td>"org.apache.carbondata.format", identifies and creates a CarbonData table. </td>
-    <td>No </td>
-  </tr>
-  <tr>
-    <td>TBLPROPERTIES </td>
-    <td>List of CarbonData table properties. </td>
-    <td> </td>
-  </tr>
-  </tbody>
+    <thead>
+    <tr>
+        <th>Parameter </th>
+        <th>Description </th>
+        <th>Optional </th>
+    </tr>
+    </thead>
+    <tbody>
+    <tr>
+        <td>db_name </td>
+        <td>Name of the database. Database name should consist of alphanumeric characters and underscore(_) special character. </td>
+        <td>Yes </td>
+    </tr>
+    <tr>
+        <td>field_list </td>
+        <td>Comma separated List of fields with data type. The field names should consist of alphanumeric characters and underscore(_) special character. </td>
+        <td>No </td>
+    </tr>
+    <tr>
+        <td>table_name </td>
+        <td>The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. </td>
+        <td>No </td>
+    </tr>
+    <tr>
+        <td>STORED BY </td>
+        <td>"org.apache.carbondata.format" identifies and creates a CarbonData table. </td>
+        <td>No </td>
+    </tr>
+    <tr>
+        <td>TBLPROPERTIES </td>
+        <td>List of CarbonData table properties. </td>
+        <td> </td>
+    </tr>
+    </tbody>
 </table><h3>Usage Guidelines</h3><p>Following are the guidelines for using table properties.</p>
 <ul>
-  <li><p><strong>Dictionary Encoding Configuration</strong></p><p>Dictionary encoding is enabled by default for all String columns, and disabled for non-String columns. You can include and exclude columns for dictionary encoding.</p></li>
+    <li><p><strong>Dictionary Encoding Configuration</strong></p><p>Dictionary encoding is enabled by default for all String columns, and disabled for non-String columns. You can include and exclude columns for dictionary encoding.</p></li>
 </ul><p><code>
-  TBLPROPERTIES (&quot;DICTIONARY_EXCLUDE&quot;=&quot;column1, column2&quot;)
-  TBLPROPERTIES (&quot;DICTIONARY_INCLUDE&quot;=&quot;column1, column2&quot;)
+    TBLPROPERTIES (&quot;DICTIONARY_EXCLUDE&quot;=&quot;column1, column2&quot;)
+    TBLPROPERTIES (&quot;DICTIONARY_INCLUDE&quot;=&quot;column1, column2&quot;)
 </code></p><p>Here, DICTIONARY_EXCLUDE will exclude dictionary creation. This is applicable for high-cardinality columns and is an optional parameter. DICTIONARY_INCLUDE will generate dictionary for the columns specified in the list.</p>
 <ul>
-  <li><p><strong>Row/Column Format Configuration</strong></p><p>Column groups with more than one column are stored in row format, instead of columnar format. By default, each column is a separate column group.</p></li>
+    <li><p><strong>Row/Column Format Configuration</strong></p><p>Column groups with more than one column are stored in row format, instead of columnar format. By default, each column is a separate column group.</p></li>
 </ul><p><code>
-  TBLPROPERTIES (&quot;COLUMN_GROUPS&quot;=&quot;(column1, column3),
-  (Column4,Column5,Column6)&quot;)
+    TBLPROPERTIES (&quot;COLUMN_GROUPS&quot;=&quot;(column1, column3),
+    (Column4,Column5,Column6)&quot;)
 </code></p>
 <ul>
-  <li><p><strong>Table Block Size Configuration</strong></p><p>The block size of table files can be defined using the property TABLE_BLOCKSIZE. It accepts only integer values. The default value is 1024 MB and supports a range of 1 MB to 2048 MB.  If you do not specify this value in the DDL command, default value is used.</p></li>
+    <li><p><strong>Table Block Size Configuration</strong></p><p>The block size of table files can be defined using the property TABLE_BLOCKSIZE. It accepts only integer values. The default value is 1024 MB and supports a range of 1 MB to 2048 MB.  If you do not specify this value in the DDL command, default value is used.</p></li>
 </ul><p><code>
-  TBLPROPERTIES (&quot;TABLE_BLOCKSIZE&quot;=&quot;512 MB&quot;)
+    TBLPROPERTIES (&quot;TABLE_BLOCKSIZE&quot;=&quot;512 MB&quot;)
 </code></p><p>Here 512 MB means the block size of this table is 512 MB, you can also set it as 512M or 512.</p>
 <ul>
-  <li><p><strong>Inverted Index Configuration</strong></p><p>Inverted index is very useful to improve compression ratio and query speed, especially for those low-cardinality columns who are in reward position.  By default inverted index is enabled. The user can disable the inverted index creation for some columns.</p></li>
+    <li><p><strong>Inverted Index Configuration</strong></p><p>Inverted index is very useful to improve compression ratio and query speed, especially for those low-cardinality columns which are in reward position. By default the inverted index is enabled. The user can disable the inverted index creation for some columns.</p></li>
 </ul><p><code>
-  TBLPROPERTIES (&quot;NO_INVERTED_INDEX&quot;=&quot;column1, column3&quot;)
+    TBLPROPERTIES (&quot;NO_INVERTED_INDEX&quot;=&quot;column1, column3&quot;)
 </code></p><p>No inverted index shall be generated for the columns specified in NO_INVERTED_INDEX. This property is applicable on columns with high-cardinality and is an optional parameter.</p><p>NOTE:</p>
 <ul>
-  <li><p>By default all columns other than numeric datatype are treated as dimensions and all columns of numeric datatype are treated as measures.</p></li>
-  <li><p>All dimensions except complex datatype columns are part of multi dimensional key(MDK). This behavior can be overridden by using TBLPROPERTIES. If the user wants to keep any column (except columns of complex datatype) in multi dimensional key then he can keep the columns either in DICTIONARY_EXCLUDE or DICTIONARY_INCLUDE.</p><h3>Example:</h3>
-    <p><pre><code>
+    <li><p>By default all columns other than numeric datatype are treated as dimensions and all columns of numeric datatype are treated as measures.</p></li>
+    <li><p>All dimensions except complex datatype columns are part of multi dimensional key(MDK). This behavior can be overridden by using TBLPROPERTIES. If the user wants to keep any column (except columns of complex datatype) in multi dimensional key then he can keep the columns either in DICTIONARY_EXCLUDE or DICTIONARY_INCLUDE.</p><h3>Example:</h3>
+        <p><pre><code>
     CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
                                                                 productNumber Int,
                                                                 productName String,
@@ -101,147 +101,156 @@
                                                                 productBatch String,
                                                                 saleQuantity Int,
                                                                 revenue Int)
-                                                                STORED BY &#39;carbondata&#39;
-                                                                TBLPROPERTIES (&#39;COLUMN_GROUPS&#39;=&#39;(productNumber,productName)&#39;,
-                                                                &#39;DICTIONARY_EXCLUDE&#39;=&#39;storeCity&#39;,
-                                                                &#39;DICTIONARY_INCLUDE&#39;=&#39;productNumber&#39;,
-                                                                &#39;NO_INVERTED_INDEX&#39;=&#39;productBatch&#39;)
-  </code></pre></p></li>
-</ul><h2 id="show-table">SHOW TABLE</h2><p>This command can be used to list all the tables in current database or all the tables of a specific database. <code>
-  SHOW TABLES [IN db_Name];
-</code></p><h3>Parameter Description</h3>
+    STORED BY &#39;carbondata&#39;
+    TBLPROPERTIES (&#39;COLUMN_GROUPS&#39;=&#39;(productNumber,productName)&#39;,
+    &#39;DICTIONARY_EXCLUDE&#39;=&#39;storeCity&#39;,
+    &#39;DICTIONARY_INCLUDE&#39;=&#39;productNumber&#39;,
+    &#39;NO_INVERTED_INDEX&#39;=&#39;productBatch&#39;)
+  </code></pre>
+        </p></li>
+</ul><h2 id="show-table">SHOW TABLE</h2><p>This command can be used to list all the tables in the
+    current database or all the tables of a specific database. <code>
+        SHOW TABLES [IN db_Name];
+    </code></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
-  <tr>
-    <th>Parameter </th>
-    <th>Description </th>
-    <th>Optional </th>
-  </tr>
-  </thead>
-  <tbody>
-  <tr>
-    <td>IN db_Name </td>
-    <td>Name of the database. Required only if tables of this specific database are to be listed. </td>
-    <td>Yes </td>
-  </tr>
-  </tbody>
+    <thead>
+    <tr>
+        <th>Parameter</th>
+        <th>Description</th>
+        <th>Optional</th>
+    </tr>
+    </thead>
+    <tbody>
+    <tr>
+        <td>IN db_Name </td>
+        <td>Name of the database. Required only if tables of this specific database are to be listed. </td>
+        <td>Yes </td>
+    </tr>
+    </tbody>
 </table><h3>Example:</h3><p><code>
-  SHOW TABLES IN ProductSchema;
+    SHOW TABLES IN ProductSchema;
 </code></p><h2 id="drop-table">DROP TABLE</h2><p>This command is used to delete an existing table.</p><p><code>
-  DROP TABLE [IF EXISTS] [db_name.]table_name;
+    DROP TABLE [IF EXISTS] [db_name.]table_name;
 </code></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
-  <tr>
-    <th>Parameter </th>
-    <th>Description </th>
-    <th>Optional </th>
-  </tr>
-  </thead>
-  <tbody>
-  <tr>
-    <td>db_Name </td>
-    <td>Name of the database. If not specified, current database will be selected. </td>
-    <td>YES </td>
-  </tr>
-  <tr>
-    <td>table_name </td>
-    <td>Name of the table to be deleted. </td>
-    <td>NO </td>
-  </tr>
-  </tbody>
+    <thead>
+    <tr>
+        <th>Parameter </th>
+        <th>Description </th>
+        <th>Optional </th>
+    </tr>
+    </thead>
+    <tbody>
+    <tr>
+        <td>db_Name </td>
+        <td>Name of the database. If not specified, current database will be selected. </td>
+        <td>YES </td>
+    </tr>
+    <tr>
+        <td>table_name </td>
+        <td>Name of the table to be deleted. </td>
+        <td>NO </td>
+    </tr>
+    </tbody>
 </table><h3>Example:</h3><p><code>
-  DROP TABLE IF EXISTS productSchema.productSalesTable;
+    DROP TABLE IF EXISTS productSchema.productSalesTable;
 </code></p><h2 id="compaction">COMPACTION</h2><p>This command merges the specified number of segments into one segment. This enhances the query performance of the table.</p><p><code>
-  ALTER TABLE [db_name.]table_name COMPACT &#39;MINOR/MAJOR&#39;;
+    ALTER TABLE [db_name.]table_name COMPACT &#39;MINOR/MAJOR&#39;;
 </code></p><p>To get details about Compaction refer to Data Management</p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
-  <tr>
-    <th>Parameter </th>
-    <th>Description </th>
-    <th>Optional </th>
-  </tr>
-  </thead>
-  <tbody>
-  <tr>
-    <td>db_name </td>
-    <td>Database name, if it is not specified then it uses current database. </td>
-    <td>YES </td>
-  </tr>
-  <tr>
-    <td>table_name </td>
-    <td>The name of the table in provided database.</td>
-    <td>NO </td>
-  </tr>
-  </tbody>
+    <thead>
+    <tr>
+        <th>Parameter </th>
+        <th>Description </th>
+        <th>Optional </th>
+    </tr>
+    </thead>
+    <tbody>
+    <tr>
+        <td>db_name </td>
+        <td>Database name, if it is not specified then it uses current database. </td>
+        <td>YES </td>
+    </tr>
+    <tr>
+        <td>table_name </td>
+        <td>The name of the table in provided database.</td>
+        <td>NO </td>
+    </tr>
+    </tbody>
 </table><h3>Syntax</h3>
 <ul>
-  <li><strong>Minor Compaction</strong></li>
-</ul><p><code>
-  ALTER TABLE table_name COMPACT &#39;MINOR&#39;;
-</code>
+    <li><strong>Minor Compaction</strong></li>
+    <p>
+        <code>
+            ALTER TABLE table_name COMPACT &#39;MINOR&#39;;
+        </code>
+    </p>
+</ul>
 <ul>
-  <li><strong>Major Compaction</strong></li></p><p><code>
-  ALTER TABLE table_name COMPACT &#39;MAJOR&#39;;
+    <li><strong>Major Compaction</strong></li>
+    </p><p><code>
+    ALTER TABLE table_name COMPACT &#39;MAJOR&#39;;
 </code></p>
 </ul>
- <h2 id="bucketing">BUCKETING</h2>
-<p>Bucketing feature can be used to distribute/organize the table/partition data into multiple files such that similar records are present in the same file. While creating a table, a user needs to specify the columns to be used for bucketing and the number of buckets. For the selction of bucket the Hash value of columns is used.</p><p>
-  <pre>
-  <code>CREATE TABLE [IF NOT EXISTS] [db_name.]table_name  [(col_name data_type, ...)]
-        STORED BY 'carbondata'  TBLPROPERTIES('BUCKETNUMBER'='noOfBuckets', 'BUCKETCOLUMNS'='columnname', 'TABLENAME'='tablename')
+<h2 id="bucketing">BUCKETING</h2>
+<p>The bucketing feature can be used to distribute/organize the table/partition data into multiple
+    files such that similar records are present in the same file. While creating a table, a user
+    needs to specify the columns to be used for bucketing and the number of buckets. The bucket is
+    selected based on the hash value of the bucketing columns.</p><p>
+<pre>
+  <code>
+    CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
+               [(col_name data_type, ...)]
+    STORED BY 'carbondata'
+    TBLPROPERTIES('BUCKETNUMBER'='noOfBuckets',
+    'BUCKETCOLUMNS'='columnname')
   </code>
 </pre>
-    </p><p></p><h2>Parameter Description</h2>
+</p><p></p><h2>Parameter Description</h2>
 <table class="table table-striped table-bordered">
-  <thead>
-  <tr>
-    <th>Parameter </th>
-    <th>Description </th>
-    <th>Optional </th>
-  </tr>
-  </thead>
-  <tbody>
-  <tr>
-    <td>BUCKETNUMBER </td>
-    <td>Specifies the number of Buckets to be created. </td>
-    <td>No </td>
-  </tr>
-  <tr>
-    <td>BUCKETCOLUMNS </td>
-    <td>Specify the columns to be considered for Bucketing </td>
-    <td>No </td>
-  </tr>
-  <tr>
-    <td>TABLENAME </td>
-    <td>The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. </td>
-    <td>Yes </td>
-  </tr>
-  </tbody>
+    <thead>
+    <tr>
+        <th>Parameter</th>
+        <th>Description</th>
+        <th>Optional</th>
+    </tr>
+    </thead>
+    <tbody>
+    <tr>
+        <td>BUCKETNUMBER</td>
+        <td>Specifies the number of Buckets to be created.</td>
+        <td>No</td>
+    </tr>
+    <tr>
+        <td>BUCKETCOLUMNS</td>
+        <td>Specify the columns to be considered for Bucketing</td>
+        <td>No</td>
+    </tr>
+    </tbody>
 </table><h2>Usage Guidelines</h2>
 <ul>
-  <li><p>The feature is supported for Spark 1.6.2 onwards, but the performance optimization is evident from Spark 2.1 onwards.</p></li>
-  <li><p>Bucketing can not be performed for columns of Complex Data Types.</p></li>
-  <li><p>Columns in the BUCKETCOLUMN parameter must be either a dimension or a measure but combination of both is not supported.</p></li>
+    <li><p>The feature is supported from Spark 1.6.2 onwards, but the performance optimization is
+        evident from Spark 2.1 onwards.</p></li>
+    <li><p>Bucketing cannot be performed on columns of Complex Data Types.</p></li>
+    <li><p>Columns in the BUCKETCOLUMN parameter must be dimensions only. The BUCKETCOLUMN parameter
+        cannot be a measure or a combination of measures and dimensions.</p></li>
 </ul><h2>Example :</h2>
-<pre>
-  <code>CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
-                                                                productNumber Int,
-                                                                productName String,
-                                                                storeCity String,
-                                                                storeProvince String,
-                                                                productCategory String,
-                                                                productBatch String,
-                                                                saleQuantity Int,
-                                                                revenue Int)
-                                                                STORED BY 'carbondata'
-                                                                TBLPROPERTIES ('COLUMN_GROUPS'='(productName,productCategory)',
-                                                                'DICTIONARY_EXCLUDE'='productName',
-                                                                'DICTIONARY_INCLUDE'='productNumber',
-                                                                'NO_INVERTED_INDEX'='productBatch',
-                                                                'BUCKETNUMBER'='4',
-                                                                'BUCKETCOLUMNS'='productNumber,saleQuantity',
-                                                                'TABLENAME'='productSalesTable')
-  </code>
-</pre>
\ No newline at end of file
+<pre><code>
+  CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+                                productNumber Int,
+                                productName String,
+                                storeCity String,
+                                storeProvince String,
+                                productCategory String,
+                                productBatch String,
+                                saleQuantity Int,
+                                revenue Int)
+   STORED BY 'carbondata'
+   TBLPROPERTIES ('COLUMN_GROUPS'='(productName,productNumber)',
+                  'DICTIONARY_EXCLUDE'='productName',
+                  'DICTIONARY_INCLUDE'='productNumber',
+                  'NO_INVERTED_INDEX'='productBatch',
+                  'BUCKETNUMBER'='4',
+                  'BUCKETCOLUMNS'='productName')
+
+ </code></pre>
\ No newline at end of file
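
For reference, the DDL statements documented in this page can be exercised end to end
from a spark-shell. The following is a minimal sketch, assuming a CarbonSession named
"carbon" has already been created as described in the quick-start guide; the database,
table, and column names are illustrative only:

  // Create a bucketed CarbonData table (BUCKETCOLUMNS must name dimensions only)
  carbon.sql("""
    CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
      productNumber Int,
      productName String,
      saleQuantity Int)
    STORED BY 'carbondata'
    TBLPROPERTIES ('BUCKETNUMBER'='4', 'BUCKETCOLUMNS'='productName')
  """)

  // List the tables of the database, merge the table's segments, then drop it
  carbon.sql("SHOW TABLES IN productSchema").show()
  carbon.sql("ALTER TABLE productSchema.productSalesTable COMPACT 'MINOR'")
  carbon.sql("DROP TABLE IF EXISTS productSchema.productSalesTable")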

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/content/docs/latest/faq.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/faq.html b/content/docs/latest/faq.html
index d817823..6faf8f4 100644
--- a/content/docs/latest/faq.html
+++ b/content/docs/latest/faq.html
@@ -16,187 +16,51 @@
     specific language governing permissions and limitations
     under the License.
 -->
-<h1 id="faqs">FAQs</h1>
+<h1><a id="FAQs_0"></a>FAQs</h1>
 <ul>
-    <li>
-        <p><a href="#can-we-preserve-segments-from-compaction">Can we preserve Segments from
-            Compaction?</a></p>
-    </li>
-    <li>
-        <p><a href="#can-we-disable-horizontal-compaction">Can we disable horizontal compaction?</a></p>
-    </li>
-    <li>
-        <p><a href="#what-is-horizontal-compaction">What is horizontal compaction?</a></p>
-    </li>
-    <li>
-        <p><a href="#how-to-enable-compaction-while-data-loading">How to enable Compaction while data
-            loading?</a></p>
-    </li>
-    <li>
-        <p><a href="#where-are-bad-records-stored-in-carbondata">Where are Bad Records Stored in
-            CarbonData?</a></p>
-    </li>
-    <li>
-        <p><a href="#what-are-bad-records">What are Bad Records?</a></p>
-    </li>
-    <li>
-        <p><a href="#can-we-use-carbondata-on-standalone-spark-cluster">Can we use CarbonData on
-            Standalone Spark Cluster?</a></p>
-    </li>
-    <li>
-        <p><a href="#what-versions-of-apache-spark-are-compatible-with-carbondata">What versions of
-            Apache Spark are Compatible with CarbonData?</a></p>
-    </li>
-    <li>
-        <p><a href="#can-we-load-data-from-excel">Can we Load Data from excel?</a></p>
-    </li>
-    <li>
-        <p><a href="#how-to-enable-single-pass-data-loading">How to enable Single Pass Data Loading?</a>
-        </p>
-    </li>
-    <li>
-        <p><a href="#what-is-single-pass-data-loading">What is Single Pass Data Loading?</a></p>
-    </li>
-    <li>
-        <p><a href="#how-to-specify-the-data-loading-format-for-carbondata">How to specify the data
-            loading format for CarbonData ?</a></p>
-    </li>
-    <li>
-        <p><a href="#how-to-resolve-store-location-can-not-be-found">How to resolve store location can't
-            be found?</a></p>
-    </li>
-    <li>
-        <p><a href="">What is carbon.lock.type?</a></p>
-    </li>
-    <li>
-        <p><a href="#how-to-enable-auto-compaction">How to enable Auto Compaction?</a></p>
-    </li>
-    <li>
-        <p><a href="#how-to-resolve-abstract-method-error">How to resolve Abstract Method Error?</a></p>
-    </li>
-    <li>
-        <p><a href="#getting-exception-on-creating-a-view">Getting Exception on Creating a View</a></p>
-    </li>
-    <li>
-        <p><a href="#is-carbondata-supported-for-windows">Is CarbonData supported for Windows?</a></p>
-    </li>
-
+    <li><a href="#what-are-bad-records">What are Bad Records?</a></li>
+    <li><a href="#where-are-bad-records-stored-in-carbondata">Where are Bad Records Stored in CarbonData?</a></li>
+    <li><a href="#how-to-enable-bad-record-logging">How to enable Bad Record Logging?</a></li>
+    <li><a href="#how-to-ignore-the-bad-records">How to ignore the Bad Records?</a></li>
+    <li><a href="#how-to-specify-store-location-while-creating-carbon-session">How to specify store location while creating carbon session?</a></li>
+    <li><a href="#what-is-carbon-lock-type">What is Carbon Lock Type?</a></li>
+    <li><a href="#how-to-resolve-abstract-method-error">How to resolve Abstract Method Error?</a></li>
 </ul>
-
-<h2 id="can-we-preserve-segments-from-compaction">Can we preserve Segments from Compaction?</h2>
-<p>If you want to preserve number of segments from being compacted then you can set the property
-    <strong>carbon.numberof.preserve.segments</strong> equal to the <strong>value of number of
-        segments to be preserved</strong>.</p>
-<p>Note : <em>No segments are preserved by Default.</em></p>
-
-<h2 id="can-we-disable-horizontal-compaction">Can we disable horizontal compaction?</h2>
-<p>Yes, to disable horizontal compaction, set <strong>carbon.horizontal.compaction.enable</strong>
-    to <code>FALSE</code> in carbon.properties file.</p>
-
-<h2 id="what-is-horizontal-compaction">What is horizontal compaction?</h2>
-<p>Compaction performed after Update and Delete operations is referred as Horizontal Compaction.
-    After every DELETE and UPDATE operation, horizontal compaction may occur in case the delta
-    (DELETE/ UPDATE) files becomes more than specified threshold.</p>
-<p>By default the parameter <strong>carbon.horizontal.compaction.enable</strong> enabling the
-    horizontal compaction is set to <code>TRUE</code>.</p>
-
-<h2 id="how-to-enable-compaction-while-data-loading">How to enable Compaction while data
-    loading?</h2>
-<p>To enable compaction while data loading, set <strong>carbon.enable.auto.load.merge</strong> to
-    <code>TRUE</code> in carbon.properties file.</p>
-
-<h2 id="where-are-bad-records-stored-in-carbondata">Where are Bad Records Stored in CarbonData?</h2>
-<p>The bad records are stored at the location set in carbon.badRecords.location in carbon.properties
-    file.<br>
-    By default <strong>carbon.badRecords.location</strong> specifies the following location <code>/opt/Carbon/Spark/badrecords</code>.
-</p>
-
-<h2 id="what-are-bad-records">What are Bad Records?</h2>
-<p>Records that fail to get loaded into the CarbonData due to data type incompatibility are
-    classified as Bad Records.</p>
-
-<h2 id="can-we-use-carbondata-on-standalone-spark-cluster">Can we use CarbonData on Standalone Spark
-    Cluster?</h2>
-<p>Yes, CarbonData can be used on a Standalone spark cluster. But using a standalone cluster has
-    following limitations:</p>
+<h2 id="what-are-bad-records"><a id="What_are_Bad_Records_10"></a>What are Bad Records?</h2>
+<p>Records that fail to load into CarbonData because of data type incompatibility, empty values, or an incompatible format are classified as Bad Records.</p>
+<h2 id="where-are-bad-records-stored-in-carbondata"><a id="Where_are_Bad_Records_Stored_in_CarbonData_13"></a>Where are Bad Records Stored in CarbonData?</h2>
+<p>The bad records are stored at the location set in carbon.badRecords.location in carbon.properties file.<br>
+    By default <strong>carbon.badRecords.location</strong> specifies the following location <code>/opt/Carbon/Spark/badrecords</code>.</p>
+<h2 id="how-to-enable-bad-record-logging"><a id="How_to_enable_Bad_Record_Logging_17"></a>How to enable Bad Record Logging?</h2>
+<p>While loading data we can specify the approach to handle Bad Records. In order to analyse the cause of the Bad Records, the parameter <code>BAD_RECORDS_LOGGER_ENABLE</code> must be set to <code>TRUE</code>. There are multiple approaches to handling Bad Records, which can be specified by the parameter <code>BAD_RECORDS_ACTION</code>.</p>
 <ul>
-    <li>single node cluster cannot be scaled up</li>
-    <li>the maximum memory and the CPU computation power has a fixed limit</li>
-    <li>the number of processors are limited in a single node cluster</li>
+    <li>To pad the incorrect values of the CSV rows with NULL and load the data into CarbonData, set the following in the query :</li>
 </ul>
-<p>To harness the actual speed of execution of CarbonData on petabytes of data, it is suggested to
-    use a Multinode Cluster.</p>
-
-<h2 id="what-versions-of-apache-spark-are-compatible-with-carbondata">What versions of Apache Spark
-    are Compatible with CarbonData?</h2>
-<p>Currently <strong>Spark 1.6.2</strong> and <strong>Spark 2.1</strong> is compatible with
-    CarbonData.</p>
-
-<h2 id="can-we-load-data-from-excel">Can we Load Data from excel?</h2>
-<p>Yes, the data can be loaded from excel provided the data is in CSV format.</p>
-
-<h2 id="how-to-enable-single-pass-data-loading">How to enable Single Pass Data Loading?</h2>
-<p>You need to set <strong>SINGLE_PASS</strong> to <code>True</code> and append it to
-    <code>OPTIONS</code> Section in the query as demonstrated in the Load Query below :</p>
-<pre><code>LOAD DATA local inpath '/opt/rawdata/data.csv' INTO table carbontable
-OPTIONS('DELIMITER'=',', 'QUOTECHAR'='&quot;','FILEHEADER'='empno,empname,designation','USE_KETTLE'='FALSE')
+<pre><code>'BAD_RECORDS_ACTION'='FORCE'
 </code></pre>
-<p>Refer to <a
-        href="https://github.com/PallaviSingh1992/incubator-carbondata/blob/6b4dd5f3dea8c93839a94c2d2c80ab7a799cf209/docs/dml-operation-on-carbondata.md">DML-operations-in-CarbonData</a>
-    for more details and example.</p>
-
-<h2 id="what-is-single-pass-data-loading">What is Single Pass Data Loading?</h2>
-<p>Single Pass Loading enables single job to finish data loading with dictionary generation on the
-    fly. It enhances performance in the scenarios where the subsequent data loading after initial
-    load involves fewer incremental updates on the dictionary.<br>
-    This option specifies whether to use single pass for loading data or not. By default this option
-    is set to <code>FALSE</code>.</p>
-
-<h2 id="how-to-specify-the-data-loading-format-for-carbondata">How to specify the data loading
-    format for CarbonData?</h2>
-<p>Edit carbon.properties file. Modify the value of parameter
-    <strong>carbon.data.file.version</strong>.<br>
-    Setting the parameter <strong>carbon.data.file.version</strong> to <code>1</code> will support
-    data loading in <code>old format(0.x version)</code> and setting <strong>carbon.data.file.version</strong>
-    to <code>2</code> will support data loading in <code>new format(1.x onwards)</code> only.<br>
-    By default the data loading is supported using the new format.</p>
-
-<h2 id="how-to-resolve-store-location-can-not-be-found">How to resolve store location can not be
-    found?</h2>
-<p>Try creating <code>carbonsession</code> with <code>storepath</code> specified in the following
-    manner :</p>
+<ul>
+    <li>To write the Bad Records, without padding incorrect values with NULL, to the raw CSV at the location set in the parameter <strong>carbon.badRecords.location</strong>, set the following in the query :</li>
+</ul>
+<pre><code>'BAD_RECORDS_ACTION'='REDIRECT'
+</code></pre>
+<h2 id="how-to-ignore-the-bad-records"><a id="How_to_ignore_the_Bad_Records_30"></a>How to ignore the Bad Records?</h2>
+<p>To prevent the Bad Records from being stored in the raw CSV, set the following in the query :</p>
+<pre><code>'BAD_RECORDS_ACTION'='IGNORE'
+</code></pre>
+<h2 id="how-to-specify-store-location-while-creating-carbon-session"><a id="How_to_specify_store_location_while_creating_carbon_session_36"></a>How to specify store location while creating carbon session?</h2>
+<p>The store location specified while creating the carbon session is used by CarbonData to store metadata such as the schema, dictionary files, dictionary metadata, and sort indexes.</p>
+<p>Try creating <code>carbonsession</code> with <code>storepath</code> specified in the following manner :</p>
 <pre><code>val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&lt;store_path&gt;)
 </code></pre>
 <p>Example:</p>
 <pre><code>val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&quot;hdfs://localhost:9000/carbon/store &quot;)
 </code></pre>
-
-<h2 id="what-is-carbon-lockt-ype">What is carbon.lock.type?</h2>
-<p>This property configuration specifies the type of lock to be acquired during concurrent
-    operations on table. This property can be set with the following values :</p>
+<h2 id="what-is-carbon-lock-type"><a id="What_is_Carbon_Lock_Type_48"></a>What is Carbon Lock Type?</h2>
+<p>Apache CarbonData acquires locks on files to prevent concurrent operations from modifying the same files. The lock type depends on the storage location; for HDFS it should be set to HDFSLOCK. By default it is set to LOCALLOCK.<br>
+    The property carbon.lock.type specifies the type of lock to be acquired during concurrent operations on a table. This property can be set with the following values :</p>
 <ul>
-    <li><strong>LOCALLOCK</strong> : This Lock is created on local file system as file. This lock is
-        useful when only one spark driver (thrift server) runs on a machine and no other CarbonData
-        spark application is launched concurrently.
-    </li>
-    <li><strong>HDFSLOCK</strong> : This Lock is created on HDFS file system as file. This lock is
-        useful when multiple CarbonData spark applications are launched and no ZooKeeper is running
-        on cluster and the HDFS supports, file based locking.
-    </li>
+    <li><strong>LOCALLOCK</strong> : This lock is created as a file on the local file system. This lock is useful when only one spark driver (thrift server) runs on a machine and no other CarbonData spark application is launched concurrently.</li>
+    <li><strong>HDFSLOCK</strong> : This lock is created as a file on the HDFS file system. This lock is useful when multiple CarbonData spark applications are launched, no ZooKeeper is running on the cluster, and the HDFS supports file-based locking.</li>
 </ul>
-
-<h2 id="how-to-enable-auto-compaction">How to enable Auto Compaction?</h2>
-<p>To enable compaction set <strong>carbon.enable.auto.load.merge</strong> to <code>TRUE</code> in
-    the carbon.properties file.</p>
-
-<h2 id="how-to-resolve-abstract-method-error">How to resolve Abstract Method Error?</h2>
-<p>You need to specify the <code>spark version</code> while using Maven to build project.</p>
-
-<h2 id="getting-exception-on-creating-a-view">Getting Exception on Creating a View</h2>
-<p>View not supported in CarbonData.</p>
-
-<h2 id="is-carbondata-supported-for-windows">Is CarbonData supported for Windows?</h2>
-<p>We may provide support for windows in future. You are welcome to contribute if you want to add
-    the support :)</p>
-
-</body></html>
+<h2 id="how-to-resolve-abstract-method-error"><a id="How_to_resolve_Abstract_Method_Error_54"></a>How to resolve Abstract Method Error?</h2>
+<p>In order to build the CarbonData project it is necessary to specify the spark profile, which sets the Spark version. You need to specify the <code>spark version</code> while using Maven to build the project.</p>
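
The BAD_RECORDS_ACTION values shown above are passed in the OPTIONS clause of the load
command. A minimal sketch of a complete load, assuming a CarbonSession named "carbon",
the test_table from the quick-start guide, and an illustrative HDFS input path:

  // Log bad records and redirect them to the raw CSV at carbon.badRecords.location
  carbon.sql("""
    LOAD DATA INPATH 'hdfs://localhost:9000/data/sample.csv' INTO TABLE test_table
    OPTIONS('DELIMITER'=',',
            'BAD_RECORDS_LOGGER_ENABLE'='TRUE',
            'BAD_RECORDS_ACTION'='REDIRECT')
  """)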

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/content/docs/latest/quick-start-guide.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/quick-start-guide.html b/content/docs/latest/quick-start-guide.html
index 6f6a864..a2688c2 100644
--- a/content/docs/latest/quick-start-guide.html
+++ b/content/docs/latest/quick-start-guide.html
@@ -49,9 +49,14 @@
 </code></p>
 <ul>
     <li>Create a CarbonSession :</li>
-</ul><p><code>
-    val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&quot;&lt;hdfs store path&gt;&quot;)
-</code></p><h4>Executing Queries</h4><h5>Creating a Table</h5><p><code>
+</ul><pre><code>val carbon = SparkSession
+            .builder()
+            .config(sc.getConf)
+            .getOrCreateCarbonSession(&quot;&lt;hdfs store path&gt;&quot;)
+</code></pre>
+<p>NOTE: By default the metastore location points to "…/carbon.metastore"; the user can provide their own metastore location to CarbonSession like</p>
+<pre><code>SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&quot;&lt;hdfs store path&gt;&quot;, &quot;&lt;local metastore path&gt;&quot;)
+</code></pre><h4>Executing Queries</h4><h5>Creating a Table</h5><p><code>
     scala&gt;carbon.sql(&quot;CREATE TABLE IF NOT EXISTS test_table(id string, name string, city string, age Int) STORED BY &#39;carbondata&#39;&quot;)
 </code></p><h5>Loading Data to a Table</h5><p><code>
     scala&gt;carbon.sql(&quot;LOAD DATA INPATH &#39;sample.csv file path&#39; INTO TABLE test_table&quot;)
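
Putting the quick-start steps together, a minimal spark-shell session might look as
follows, assuming the CarbonSession implicits from the CarbonData Spark integration are
available; the HDFS store path, metastore path, and CSV location are illustrative only:

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.CarbonSession._

  val carbon = SparkSession
    .builder()
    .config(sc.getConf)
    .getOrCreateCarbonSession("hdfs://localhost:9000/carbon/store",
                              "/tmp/carbon.metastore")

  carbon.sql("CREATE TABLE IF NOT EXISTS test_table(id string, name string, city string, age Int) STORED BY 'carbondata'")
  carbon.sql("LOAD DATA INPATH 'hdfs://localhost:9000/data/sample.csv' INTO TABLE test_table")
  carbon.sql("SELECT city, avg(age) FROM test_table GROUP BY city").show()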

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/content/docs/latest/troubleshooting.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/troubleshooting.html b/content/docs/latest/troubleshooting.html
index 7be27da..4c728c6 100644
--- a/content/docs/latest/troubleshooting.html
+++ b/content/docs/latest/troubleshooting.html
@@ -18,28 +18,6 @@
 --><h1>Troubleshooting</h1>
 <p>This tutorial is designed to provide troubleshooting for end users and developers
     who are building, deploying, and using CarbonData.</p>
-<ul>
-    <li><a href="#failed-to-load-thrift-libraries">Failed to load thrift libraries</a></li>
-    <li><a href="#failed-to-launch-the-spark-shell">Failed to launch the Spark Shell</a></li>
-    <li><a href="#query-failure-with-generic-error-on-the-beeline">Query Failure with Generic Error
-        on the Beeline</a></li>
-    <li><a href="#failed-to-execute-load-query-on-cluster">Failed to execute load query on
-        cluster</a></li>
-    <li><a href="#failed-to-execute-insert-query-on-cluster">Failed to execute insert query on
-        cluster</a></li>
-    <li><a href="#failed-to-connect-to-hiveuser-with-thrift">Failed to connect to hiveuser with
-        thrift</a></li>
-    <li><a href="#failure-to-read-the-metastore-db-during-table-creation">Failure to read the
-        metastore db during table creation</a></li>
-    <li><a href="#failed-to-load-data-on-the-cluster">Failed to load data on the cluster</a></li>
-    <li><a href="#failed-to-insert-data-on-the-cluster">Failed to insert data on the cluster</a>
-    </li>
-    <li><a href="#failed-to-execute-concurrent-operations">Failed to execute Concurrent
-        Operations</a></li>
-    <li><a href="#failed-to-create-a-table-with-a-single-numeric-column">Failed to create a table
-        with a single numeric column</a></li>
-    <li><a href="#data-failure-because-of-bad-records">Data Failure because of Bad Records</a></li>
-</ul>
 <h2 id="failed-to-load-thrift-libraries">Failed to load thrift libraries</h2>
 <p><strong>Symptom</strong></p>
 <p>Thrift throws following exception :</p>
@@ -70,8 +48,7 @@ libthriftc.so.0: cannot open shared object file: No such file or directory
 </code></pre>
     </li>
 </ol>
-<pre><code>Note : Remember to add only the path to the directory, not the full path for that file, all the libraries inside that path will be automatically indexed.
-</code></pre>
Note : Remember to add only the path to the directory, not the full path to the file; all the libraries inside that path will be indexed automatically.
 <h2 id="failed-to-launch-the-spark-shell">Failed to launch the Spark Shell</h2>
 <p><strong>Symptom</strong></p>
 <p>The shell prompts the following error :</p>
@@ -90,56 +67,11 @@ $OverrideCatalog$$overrides_$e
         <p>Use the following command :</p>
     </li>
 </ol>
-<pre><code>```
+<pre><code>
  &quot;mvn -Pspark-2.1 -Dspark.version {yourSparkVersion} clean package&quot;
-```
 
-Note :  Refrain from using &quot;mvn clean package&quot; without specifying the profile.
 </code></pre>
-<h2 id="query-failure-with-generic-error-on-the-beeline">Query Failure with Generic Error
-    on the Beeline</h2>
-<p><strong>Symptom</strong></p>
-<p>Query fails on the executor side and generic error message is printed on the beeline console</p>
-<p><img src="../../../content/docs/latest/images/query_failure_beeline.png?raw=true"
-        alt="Query Failure Beeline"></p>
-<p><strong>Possible Causes</strong></p>
-<ul>
-    <li>In Query flow, Table B-Tree will be loaded into memory on the driver side and filter
-        condition is validated against the min-max of each block to identify false positive,<br>
-        Once the blocks are selected, based on number of available executors, blocks will be
-        distributed to each executor node as shown in below driver logs snapshot
-    </li>
-</ul>
-<p><img src="../../../content/docs/latest/images/query_failure_logs.png?raw=true"
-        alt="Query Failure Logs"></p>
-<ul>
-    <li>
-        <p>When the error occurs in driver side while b-tree loading or block distribution, detail
-            error message will be printed on the beeline console and error trace will be printed on
-            the driver logs.</p>
-    </li>
-    <li>
-        <p>When the error occurs in the executor side, generic error message will be printed as
-            shown in issue description.</p>
-    </li>
-</ul>
-<p><img src="../../../content/docs/latest/images/query_failure_job_details.png?raw=true"
-        alt="Query Failure Job Details"></p>
-<ul>
-    <li>Details of the failed stages can be seen in the Spark Application UI by clicking on the
-        failed stages on the failed job as shown in previous snapshot
-    </li>
-</ul>
-<p><img src="../../../content/docs/latest/images/query_failure_spark_ui.png?raw=true"
-        alt="Query Failure Spark UI"></p>
-<p><strong>Procedure</strong></p>
-<p>Details of the error can be analyzed in details using executor logs available in stdout</p>
-<p><img src="../../../content/docs/latest/images/query_failure_procedure.png?raw=true"
-        alt="Query Failure Spark UI"></p>
-<p>Below snapshot shows executor logs with error message for query failure which can be helpful to
-    locate the error</p>
-<p><img src="../../../content/docs/latest/images/query_failure_issue.png?raw=true"
-        alt="Query Failure Spark UI"></p>
+Note :  Refrain from using &quot;mvn clean package&quot; without specifying the profile.
 <h2 id="failed-to-execute-load-query-on-cluster">Failed to execute load query on cluster.
 </h2>
 <p><strong>Symptom</strong></p>
@@ -282,16 +214,4 @@ Note :  Refrain from using &quot;mvn clean package&quot; without specifying the
 <p>Behavior not supported.</p>
 <p><strong>Procedure</strong></p>
 <p>A single column that can be considered as dimension is mandatory for table creation.</p>
-<h2 id="data-failure-because-of-bad-records">Data Failure because of Bad Records</h2>
-<p><strong>Symptom</strong></p>
-<p>Data Loading fails with the following exception</p>
-<pre><code>Error: java.lang.Exception: Data load failed due to Bad record
-</code></pre>
-<p><strong>Possible Causes</strong></p>
-<p>The parameter BAD_RECORDS_ACTION has not been specified in the Query.</p>
-<p><strong>Procedure</strong></p>
-<p>Set the following parameter in the load command OPTIONS as shown below :</p>
-<p>'BAD_RECORDS_ACTION'='FORCE'</p>
-<p><em>Example :</em></p>
-<pre><code>LOAD DATA INPATH 'hdfs://hacluster/user/loader/moredata01.csv' INTO TABLE flow_carbon_256b OPTIONS('DELIMITER'=',', 'BAD_RECORDS_ACTION'='FORCE');
-</code></pre>
+
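
The carbon.lock.type and carbon.badRecords.location settings discussed in the FAQ above
are normally configured in the carbon.properties file; as a sketch, they can also be set
programmatically through the CarbonProperties utility from carbondata-core (assuming it
is on the classpath; the HDFS path is illustrative):

  import org.apache.carbondata.core.util.CarbonProperties

  // Acquire table locks on HDFS instead of the local file system (default: LOCALLOCK)
  CarbonProperties.getInstance()
    .addProperty("carbon.lock.type", "HDFSLOCK")

  // Illustrative: write redirected bad records to a shared HDFS directory
  CarbonProperties.getInstance()
    .addProperty("carbon.badRecords.location", "hdfs://localhost:9000/carbon/badrecords")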

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/content/pdf/maven-pdf-plugin.pdf
----------------------------------------------------------------------
diff --git a/content/pdf/maven-pdf-plugin.pdf b/content/pdf/maven-pdf-plugin.pdf
index 2dd9814..69f5cd8 100644
Binary files a/content/pdf/maven-pdf-plugin.pdf and b/content/pdf/maven-pdf-plugin.pdf differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/main/webapp/docs/latest/ddl-operation-on-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/ddl-operation-on-carbondata.html b/src/main/webapp/docs/latest/ddl-operation-on-carbondata.html
index ece7d53..c9f238c 100644
--- a/src/main/webapp/docs/latest/ddl-operation-on-carbondata.html
+++ b/src/main/webapp/docs/latest/ddl-operation-on-carbondata.html
@@ -18,11 +18,11 @@
 -->
 <h1>DDL Operations on CarbonData</h1><p>This tutorial guides you through the data definition language support provided by CarbonData.</p><h2>Overview</h2><p>The following DDL operations are supported in CarbonData :</p>
 <ul>
-  <li><a href="#create-table">CREATE TABLE</a></li>
-  <li><a href="#show-table">SHOW TABLE</a></li>
-  <li><a href="#drop-table">DROP TABLE</a></li>
-  <li><a href="#compaction">COMPACTION</a></li>
-  <li><a href="#bucketing">BUCKETING</a></li>
+    <li><a href="#create-table">CREATE TABLE</a></li>
+    <li><a href="#show-table">SHOW TABLE</a></li>
+    <li><a href="#drop-table">DROP TABLE</a></li>
+    <li><a href="#compaction">COMPACTION</a></li>
+    <li><a href="#bucketing">BUCKETING</a></li>
 </ul><h2 id="create-table">CREATE TABLE</h2><p>This command can be used to create a CarbonData table by specifying the list of fields along with the table properties.</p><p><pre><code>
   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
   [(col_name data_type, ...)]
@@ -31,67 +31,67 @@
   // All Carbon&#39;s additional table options will go into properties
 </code></pre></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
-  <tr>
-    <th>Parameter </th>
-    <th>Description </th>
-    <th>Optional </th>
-  </tr>
-  </thead>
-  <tbody>
-  <tr>
-    <td>db_name </td>
-    <td>Name of the database. Database name should consist of alphanumeric characters and underscore(_) special character. </td>
-    <td>Yes </td>
-  </tr>
-  <tr>
-    <td>field_list </td>
-    <td>Comma separated List of fields with data type. The field names should consist of alphanumeric characters and underscore(_) special character. </td>
-    <td>No </td>
-  </tr>
-  <tr>
-    <td>table_name </td>
-    <td>The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. </td>
-    <td>No </td>
-  </tr>
-  <tr>
-    <td>STORED BY </td>
-    <td>"org.apache.carbondata.format", identifies and creates a CarbonData table. </td>
-    <td>No </td>
-  </tr>
-  <tr>
-    <td>TBLPROPERTIES </td>
-    <td>List of CarbonData table properties. </td>
-    <td> </td>
-  </tr>
-  </tbody>
+    <thead>
+    <tr>
+        <th>Parameter </th>
+        <th>Description </th>
+        <th>Optional </th>
+    </tr>
+    </thead>
+    <tbody>
+    <tr>
+        <td>db_name </td>
+        <td>Name of the database. Database name should consist of alphanumeric characters and underscore(_) special character. </td>
+        <td>Yes </td>
+    </tr>
+    <tr>
+        <td>field_list </td>
+        <td>Comma separated List of fields with data type. The field names should consist of alphanumeric characters and underscore(_) special character. </td>
+        <td>No </td>
+    </tr>
+    <tr>
+        <td>table_name </td>
+        <td>The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. </td>
+        <td>No </td>
+    </tr>
+    <tr>
+        <td>STORED BY </td>
+        <td>"org.apache.carbondata.format" identifies and creates a CarbonData table. </td>
+        <td>No </td>
+    </tr>
+    <tr>
+        <td>TBLPROPERTIES </td>
+        <td>List of CarbonData table properties. </td>
+        <td> </td>
+    </tr>
+    </tbody>
 </table><h3>Usage Guidelines</h3><p>Following are the guidelines for using table properties.</p>
 <ul>
-  <li><p><strong>Dictionary Encoding Configuration</strong></p><p>Dictionary encoding is enabled by default for all String columns, and disabled for non-String columns. You can include and exclude columns for dictionary encoding.</p></li>
+    <li><p><strong>Dictionary Encoding Configuration</strong></p><p>Dictionary encoding is enabled by default for all String columns, and disabled for non-String columns. You can include and exclude columns for dictionary encoding.</p></li>
 </ul><p><code>
-  TBLPROPERTIES (&quot;DICTIONARY_EXCLUDE&quot;=&quot;column1, column2&quot;)
-  TBLPROPERTIES (&quot;DICTIONARY_INCLUDE&quot;=&quot;column1, column2&quot;)
+    TBLPROPERTIES (&quot;DICTIONARY_EXCLUDE&quot;=&quot;column1, column2&quot;)
+    TBLPROPERTIES (&quot;DICTIONARY_INCLUDE&quot;=&quot;column1, column2&quot;)
 </code></p><p>Here, DICTIONARY_EXCLUDE will exclude dictionary creation. This is applicable for high-cardinality columns and is an optional parameter. DICTIONARY_INCLUDE will generate dictionary for the columns specified in the list.</p>
 <ul>
-  <li><p><strong>Row/Column Format Configuration</strong></p><p>Column groups with more than one column are stored in row format, instead of columnar format. By default, each column is a separate column group.</p></li>
+    <li><p><strong>Row/Column Format Configuration</strong></p><p>Column groups with more than one column are stored in row format, instead of columnar format. By default, each column is a separate column group.</p></li>
 </ul><p><code>
-  TBLPROPERTIES (&quot;COLUMN_GROUPS&quot;=&quot;(column1, column3),
-  (Column4,Column5,Column6)&quot;)
+    TBLPROPERTIES (&quot;COLUMN_GROUPS&quot;=&quot;(column1, column3),
+    (Column4,Column5,Column6)&quot;)
 </code></p>
 <ul>
-  <li><p><strong>Table Block Size Configuration</strong></p><p>The block size of table files can be defined using the property TABLE_BLOCKSIZE. It accepts only integer values. The default value is 1024 MB and supports a range of 1 MB to 2048 MB.  If you do not specify this value in the DDL command, default value is used.</p></li>
+    <li><p><strong>Table Block Size Configuration</strong></p><p>The block size of table files can be defined using the property TABLE_BLOCKSIZE. It accepts only integer values. The default value is 1024 MB and supports a range of 1 MB to 2048 MB.  If you do not specify this value in the DDL command, default value is used.</p></li>
 </ul><p><code>
-  TBLPROPERTIES (&quot;TABLE_BLOCKSIZE&quot;=&quot;512 MB&quot;)
+    TBLPROPERTIES (&quot;TABLE_BLOCKSIZE&quot;=&quot;512 MB&quot;)
 </code></p><p>Here 512 MB means the block size of this table is 512 MB, you can also set it as 512M or 512.</p>
 <ul>
-  <li><p><strong>Inverted Index Configuration</strong></p><p>Inverted index is very useful to improve compression ratio and query speed, especially for those low-cardinality columns who are in reward position.  By default inverted index is enabled. The user can disable the inverted index creation for some columns.</p></li>
+    <li><p><strong>Inverted Index Configuration</strong></p><p>Inverted index is very useful to improve compression ratio and query speed, especially for those low-cardinality columns which are in reward position. By default the inverted index is enabled. The user can disable the inverted index creation for some columns.</p></li>
 </ul><p><code>
-  TBLPROPERTIES (&quot;NO_INVERTED_INDEX&quot;=&quot;column1, column3&quot;)
+    TBLPROPERTIES (&quot;NO_INVERTED_INDEX&quot;=&quot;column1, column3&quot;)
 </code></p><p>No inverted index shall be generated for the columns specified in NO_INVERTED_INDEX. This property is applicable on columns with high-cardinality and is an optional parameter.</p><p>NOTE:</p>
 <ul>
-  <li><p>By default all columns other than numeric datatype are treated as dimensions and all columns of numeric datatype are treated as measures.</p></li>
-  <li><p>All dimensions except complex datatype columns are part of multi dimensional key(MDK). This behavior can be overridden by using TBLPROPERTIES. If the user wants to keep any column (except columns of complex datatype) in multi dimensional key then he can keep the columns either in DICTIONARY_EXCLUDE or DICTIONARY_INCLUDE.</p><h3>Example:</h3>
-    <p><pre><code>
+    <li><p>By default all columns other than numeric datatype are treated as dimensions and all columns of numeric datatype are treated as measures.</p></li>
+    <li><p>All dimensions except complex datatype columns are part of multi dimensional key(MDK). This behavior can be overridden by using TBLPROPERTIES. If the user wants to keep any column (except columns of complex datatype) in multi dimensional key then he can keep the columns either in DICTIONARY_EXCLUDE or DICTIONARY_INCLUDE.</p><h3>Example:</h3>
+        <p><pre><code>
     CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
                                                                 productNumber Int,
                                                                 productName String,
@@ -101,147 +101,156 @@
                                                                 productBatch String,
                                                                 saleQuantity Int,
                                                                 revenue Int)
-                                                                STORED BY &#39;carbondata&#39;
-                                                                TBLPROPERTIES (&#39;COLUMN_GROUPS&#39;=&#39;(productNumber,productName)&#39;,
-                                                                &#39;DICTIONARY_EXCLUDE&#39;=&#39;storeCity&#39;,
-                                                                &#39;DICTIONARY_INCLUDE&#39;=&#39;productNumber&#39;,
-                                                                &#39;NO_INVERTED_INDEX&#39;=&#39;productBatch&#39;)
-  </code></pre></p></li>
-</ul><h2 id="show-table">SHOW TABLE</h2><p>This command can be used to list all the tables in current database or all the tables of a specific database. <code>
-  SHOW TABLES [IN db_Name];
-</code></p><h3>Parameter Description</h3>
+    STORED BY &#39;carbondata&#39;
+    TBLPROPERTIES (&#39;COLUMN_GROUPS&#39;=&#39;(productNumber,productName)&#39;,
+    &#39;DICTIONARY_EXCLUDE&#39;=&#39;storeCity&#39;,
+    &#39;DICTIONARY_INCLUDE&#39;=&#39;productNumber&#39;,
+    &#39;NO_INVERTED_INDEX&#39;=&#39;productBatch&#39;)
+  </code></pre>
+        </p></li>
+</ul><h2 id="show-table">SHOW TABLE</h2><p>This command can be used to list all the tables in the
+    current database or all the tables of a specific database. <code>
+        SHOW TABLES [IN db_Name];
+    </code></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
-  <tr>
-    <th>Parameter </th>
-    <th>Description </th>
-    <th>Optional </th>
-  </tr>
-  </thead>
-  <tbody>
-  <tr>
-    <td>IN db_Name </td>
-    <td>Name of the database. Required only if tables of this specific database are to be listed. </td>
-    <td>Yes </td>
-  </tr>
-  </tbody>
+    <thead>
+    <tr>
+        <th>Parameter</th>
+        <th>Description</th>
+        <th>Optional</th>
+    </tr>
+    </thead>
+    <tbody>
+    <tr>
+        <td>IN db_Name </td>
+        <td>Name of the database. Required only if tables of this specific database are to be listed. </td>
+        <td>Yes </td>
+    </tr>
+    </tbody>
 </table><h3>Example:</h3><p><code>
-  SHOW TABLES IN ProductSchema;
+    SHOW TABLES IN ProductSchema;
 </code></p><h2 id="drop-table">DROP TABLE</h2><p>This command is used to delete an existing table.</p><p><code>
-  DROP TABLE [IF EXISTS] [db_name.]table_name;
+    DROP TABLE [IF EXISTS] [db_name.]table_name;
 </code></p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
-  <tr>
-    <th>Parameter </th>
-    <th>Description </th>
-    <th>Optional </th>
-  </tr>
-  </thead>
-  <tbody>
-  <tr>
-    <td>db_Name </td>
-    <td>Name of the database. If not specified, current database will be selected. </td>
-    <td>YES </td>
-  </tr>
-  <tr>
-    <td>table_name </td>
-    <td>Name of the table to be deleted. </td>
-    <td>NO </td>
-  </tr>
-  </tbody>
+    <thead>
+    <tr>
+        <th>Parameter </th>
+        <th>Description </th>
+        <th>Optional </th>
+    </tr>
+    </thead>
+    <tbody>
+    <tr>
+        <td>db_Name </td>
+        <td>Name of the database. If not specified, the current database will be selected. </td>
+        <td>YES </td>
+    </tr>
+    <tr>
+        <td>table_name </td>
+        <td>Name of the table to be deleted. </td>
+        <td>NO </td>
+    </tr>
+    </tbody>
 </table><h3>Example:</h3><p><code>
-  DROP TABLE IF EXISTS productSchema.productSalesTable;
+    DROP TABLE IF EXISTS productSchema.productSalesTable;
 </code></p><h2 id="compaction">COMPACTION</h2><p>This command merges the specified number of segments into one segment. This enhances the query performance of the table.</p><p><code>
-  ALTER TABLE [db_name.]table_name COMPACT &#39;MINOR/MAJOR&#39;;
+    ALTER TABLE [db_name.]table_name COMPACT &#39;MINOR/MAJOR&#39;;
 </code></p><p>To get details about Compaction refer to Data Management</p><h3>Parameter Description</h3>
 <table class="table table-striped table-bordered">
-  <thead>
-  <tr>
-    <th>Parameter </th>
-    <th>Description </th>
-    <th>Optional </th>
-  </tr>
-  </thead>
-  <tbody>
-  <tr>
-    <td>db_name </td>
-    <td>Database name, if it is not specified then it uses current database. </td>
-    <td>YES </td>
-  </tr>
-  <tr>
-    <td>table_name </td>
-    <td>The name of the table in provided database.</td>
-    <td>NO </td>
-  </tr>
-  </tbody>
+    <thead>
+    <tr>
+        <th>Parameter </th>
+        <th>Description </th>
+        <th>Optional </th>
+    </tr>
+    </thead>
+    <tbody>
+    <tr>
+        <td>db_name </td>
+        <td>Database name. If it is not specified, the current database is used. </td>
+        <td>YES </td>
+    </tr>
+    <tr>
+        <td>table_name </td>
+        <td>The name of the table in the provided database.</td>
+        <td>NO </td>
+    </tr>
+    </tbody>
 </table><h3>Syntax</h3>
 <ul>
-  <li><strong>Minor Compaction</strong></li>
-</ul><p><code>
-  ALTER TABLE table_name COMPACT &#39;MINOR&#39;;
-</code>
+    <li><strong>Minor Compaction</strong></li>
+    <p>
+        <code>
+            ALTER TABLE table_name COMPACT &#39;MINOR&#39;;
+        </code>
+    </p>
+</ul>
 <ul>
-  <li><strong>Major Compaction</strong></li></p><p><code>
-  ALTER TABLE table_name COMPACT &#39;MAJOR&#39;;
+    <li><strong>Major Compaction</strong></li>
+    <p><code>
+    ALTER TABLE table_name COMPACT &#39;MAJOR&#39;;
 </code></p>
 </ul>
- <h2 id="bucketing">BUCKETING</h2>
-<p>Bucketing feature can be used to distribute/organize the table/partition data into multiple files such that similar records are present in the same file. While creating a table, a user needs to specify the columns to be used for bucketing and the number of buckets. For the selction of bucket the Hash value of columns is used.</p><p>
-  <pre>
-  <code>CREATE TABLE [IF NOT EXISTS] [db_name.]table_name  [(col_name data_type, ...)]
-        STORED BY 'carbondata'  TBLPROPERTIES('BUCKETNUMBER'='noOfBuckets', 'BUCKETCOLUMNS'='columnname', 'TABLENAME'='tablename')
+<h2 id="bucketing">BUCKETING</h2>
+<p>The bucketing feature can be used to distribute/organize the table/partition data into multiple files
+    such that similar records are present in the same file. While creating a table, a user needs to
+    specify the columns to be used for bucketing and the number of buckets. For the selection of
+    a bucket, the hash value of the bucketing columns is used.</p><p>
+<pre>
+  <code>
+    CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
+               [(col_name data_type, ...)]
+    STORED BY 'carbondata'
+    TBLPROPERTIES('BUCKETNUMBER'='noOfBuckets',
+    'BUCKETCOLUMNS'='columnname')
   </code>
 </pre>
-    </p><p></p><h2>Parameter Description</h2>
+</p><h2>Parameter Description</h2>
 <table class="table table-striped table-bordered">
-  <thead>
-  <tr>
-    <th>Parameter </th>
-    <th>Description </th>
-    <th>Optional </th>
-  </tr>
-  </thead>
-  <tbody>
-  <tr>
-    <td>BUCKETNUMBER </td>
-    <td>Specifies the number of Buckets to be created. </td>
-    <td>No </td>
-  </tr>
-  <tr>
-    <td>BUCKETCOLUMNS </td>
-    <td>Specify the columns to be considered for Bucketing </td>
-    <td>No </td>
-  </tr>
-  <tr>
-    <td>TABLENAME </td>
-    <td>The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. </td>
-    <td>Yes </td>
-  </tr>
-  </tbody>
+    <thead>
+    <tr>
+        <th>Parameter</th>
+        <th>Description</th>
+        <th>Optional</th>
+    </tr>
+    </thead>
+    <tbody>
+    <tr>
+        <td>BUCKETNUMBER</td>
+        <td>Specifies the number of Buckets to be created.</td>
+        <td>No</td>
+    </tr>
+    <tr>
+        <td>BUCKETCOLUMNS</td>
+        <td>Specify the columns to be considered for Bucketing</td>
+        <td>No</td>
+    </tr>
+    </tbody>
 </table><h2>Usage Guidelines</h2>
 <ul>
-  <li><p>The feature is supported for Spark 1.6.2 onwards, but the performance optimization is evident from Spark 2.1 onwards.</p></li>
-  <li><p>Bucketing can not be performed for columns of Complex Data Types.</p></li>
-  <li><p>Columns in the BUCKETCOLUMN parameter must be either a dimension or a measure but combination of both is not supported.</p></li>
+    <li><p>The feature is supported for Spark 1.6.2 onwards, but the performance optimization is
+        evident from Spark 2.1 onwards.</p></li>
+    <li><p>Bucketing cannot be performed on columns of Complex Data Types.</p></li>
+    <li><p>Columns in the BUCKETCOLUMNS parameter must be dimensions only; the BUCKETCOLUMNS
+        parameter cannot contain a measure or a combination of measures and dimensions.</p></li>
 </ul><h2>Example :</h2>
-<pre>
-  <code>CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
-                                                                productNumber Int,
-                                                                productName String,
-                                                                storeCity String,
-                                                                storeProvince String,
-                                                                productCategory String,
-                                                                productBatch String,
-                                                                saleQuantity Int,
-                                                                revenue Int)
-                                                                STORED BY 'carbondata'
-                                                                TBLPROPERTIES ('COLUMN_GROUPS'='(productName,productCategory)',
-                                                                'DICTIONARY_EXCLUDE'='productName',
-                                                                'DICTIONARY_INCLUDE'='productNumber',
-                                                                'NO_INVERTED_INDEX'='productBatch',
-                                                                'BUCKETNUMBER'='4',
-                                                                'BUCKETCOLUMNS'='productNumber,saleQuantity',
-                                                                'TABLENAME'='productSalesTable')
-  </code>
-</pre>
\ No newline at end of file
+<pre><code>
+  CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+                                productNumber Int,
+                                productName String,
+                                storeCity String,
+                                storeProvince String,
+                                productCategory String,
+                                productBatch String,
+                                saleQuantity Int,
+                                revenue Int)
+   STORED BY 'carbondata'
+   TBLPROPERTIES ('COLUMN_GROUPS'='(productName,productNumber)',
+                  'DICTIONARY_EXCLUDE'='productName',
+                  'DICTIONARY_INCLUDE'='productNumber',
+                  'NO_INVERTED_INDEX'='productBatch',
+                  'BUCKETNUMBER'='4',
+                  'BUCKETCOLUMNS'='productName')
+
+ </code></pre>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/main/webapp/docs/latest/faq.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/faq.html b/src/main/webapp/docs/latest/faq.html
index d817823..6faf8f4 100644
--- a/src/main/webapp/docs/latest/faq.html
+++ b/src/main/webapp/docs/latest/faq.html
@@ -16,187 +16,51 @@
     specific language governing permissions and limitations
     under the License.
 -->
-<h1 id="faqs">FAQs</h1>
+<h1><a id="FAQs_0"></a>FAQs</h1>
 <ul>
-    <li>
-        <p><a href="#can-we-preserve-segments-from-compaction">Can we preserve Segments from
-            Compaction?</a></p>
-    </li>
-    <li>
-        <p><a href="#can-we-disable-horizontal-compaction">Can we disable horizontal compaction?</a></p>
-    </li>
-    <li>
-        <p><a href="#what-is-horizontal-compaction">What is horizontal compaction?</a></p>
-    </li>
-    <li>
-        <p><a href="#how-to-enable-compaction-while-data-loading">How to enable Compaction while data
-            loading?</a></p>
-    </li>
-    <li>
-        <p><a href="#where-are-bad-records-stored-in-carbondata">Where are Bad Records Stored in
-            CarbonData?</a></p>
-    </li>
-    <li>
-        <p><a href="#what-are-bad-records">What are Bad Records?</a></p>
-    </li>
-    <li>
-        <p><a href="#can-we-use-carbondata-on-standalone-spark-cluster">Can we use CarbonData on
-            Standalone Spark Cluster?</a></p>
-    </li>
-    <li>
-        <p><a href="#what-versions-of-apache-spark-are-compatible-with-carbondata">What versions of
-            Apache Spark are Compatible with CarbonData?</a></p>
-    </li>
-    <li>
-        <p><a href="#can-we-load-data-from-excel">Can we Load Data from excel?</a></p>
-    </li>
-    <li>
-        <p><a href="#how-to-enable-single-pass-data-loading">How to enable Single Pass Data Loading?</a>
-        </p>
-    </li>
-    <li>
-        <p><a href="#what-is-single-pass-data-loading">What is Single Pass Data Loading?</a></p>
-    </li>
-    <li>
-        <p><a href="#how-to-specify-the-data-loading-format-for-carbondata">How to specify the data
-            loading format for CarbonData ?</a></p>
-    </li>
-    <li>
-        <p><a href="#how-to-resolve-store-location-can-not-be-found">How to resolve store location can\u2019t
-            be found?</a></p>
-    </li>
-    <li>
-        <p><a href="">What is carbon.lock.type?</a></p>
-    </li>
-    <li>
-        <p><a href="#how-to-enable-auto-compaction">How to enable Auto Compaction?</a></p>
-    </li>
-    <li>
-        <p><a href="#how-to-resolve-abstract-method-error">How to resolve Abstract Method Error?</a></p>
-    </li>
-    <li>
-        <p><a href="#getting-exception-on-creating-a-view">Getting Exception on Creating a View</a></p>
-    </li>
-    <li>
-        <p><a href="#is-carbondata-supported-for-windows">Is CarbonData supported for Windows?</a></p>
-    </li>
-
+    <li><a href="#what-are-bad-records">What are Bad Records?</a></li>
+    <li><a href="#where-are-bad-records-stored-in-carbondata">Where are Bad Records Stored in CarbonData?</a></li>
+    <li><a href="#how-to-enable-bad-record-logging">How to enable Bad Record Logging?</a></li>
+    <li><a href="#how-to-ignore-the-bad-records">How to ignore the Bad Records?</a></li>
+    <li><a href="#how-to-specify-store-location-while-creating-carbon-session">How to specify store location while creating carbon session?</a></li>
+    <li><a href="#what-is-carbon-lock-type">What is Carbon Lock Type?</a></li>
+    <li><a href="#how-to-resolve-abstract-method-error">How to resolve Abstract Method Error?</a></li>
 </ul>
-
-<h2 id="can-we-preserve-segments-from-compaction">Can we preserve Segments from Compaction?</h2>
-<p>If you want to preserve number of segments from being compacted then you can set the property
-    <strong>carbon.numberof.preserve.segments</strong> equal to the <strong>value of number of
-        segments to be preserved</strong>.</p>
-<p>Note : <em>No segments are preserved by Default.</em></p>
-
-<h2 id="can-we-disable-horizontal-compaction">Can we disable horizontal compaction?</h2>
-<p>Yes, to disable horizontal compaction, set <strong>carbon.horizontal.compaction.enable</strong>
-    to <code>FALSE</code> in carbon.properties file.</p>
-
-<h2 id="what-is-horizontal-compaction">What is horizontal compaction?</h2>
-<p>Compaction performed after Update and Delete operations is referred as Horizontal Compaction.
-    After every DELETE and UPDATE operation, horizontal compaction may occur in case the delta
-    (DELETE/ UPDATE) files becomes more than specified threshold.</p>
-<p>By default the parameter <strong>carbon.horizontal.compaction.enable</strong> enabling the
-    horizontal compaction is set to <code>TRUE</code>.</p>
-
-<h2 id="how-to-enable-compaction-while-data-loading">How to enable Compaction while data
-    loading?</h2>
-<p>To enable compaction while data loading, set <strong>carbon.enable.auto.load.merge</strong> to
-    <code>TRUE</code> in carbon.properties file.</p>
-
-<h2 id="where-are-bad-records-stored-in-carbondata">Where are Bad Records Stored in CarbonData?</h2>
-<p>The bad records are stored at the location set in carbon.badRecords.location in carbon.properties
-    file.<br>
-    By default <strong>carbon.badRecords.location</strong> specifies the following location <code>/opt/Carbon/Spark/badrecords</code>.
-</p>
-
-<h2 id="what-are-bad-records">What are Bad Records?</h2>
-<p>Records that fail to get loaded into the CarbonData due to data type incompatibility are
-    classified as Bad Records.</p>
-
-<h2 id="can-we-use-carbondata-on-standalone-spark-cluster">Can we use CarbonData on Standalone Spark
-    Cluster?</h2>
-<p>Yes, CarbonData can be used on a Standalone spark cluster. But using a standalone cluster has
-    following limitations:</p>
+<h2 id="what-are-bad-records"><a id="What_are_Bad_Records_10"></a>What are Bad Records?</h2>
+<p>Records that fail to get loaded into CarbonData because of data type incompatibility, empty values, or an incompatible format are classified as Bad Records.</p>
+<h2 id="where-are-bad-records-stored-in-carbondata"><a id="Where_are_Bad_Records_Stored_in_CarbonData_13"></a>Where are Bad Records Stored in CarbonData?</h2>
+<p>The bad records are stored at the location set in carbon.badRecords.location in the carbon.properties file.<br>
+    By default <strong>carbon.badRecords.location</strong> specifies the following location <code>/opt/Carbon/Spark/badrecords</code>.</p>
+<h2 id="how-to-enable-bad-record-logging"><a id="How_to_enable_Bad_Record_Logging_17"></a>How to enable Bad Record Logging?</h2>
+<p>While loading data we can specify the approach to handle Bad Records. In order to analyse the cause of the Bad Records, the parameter <code>BAD_RECORDS_LOGGER_ENABLE</code> must be set to <code>TRUE</code>. There are multiple approaches to handle Bad Records, which can be specified by the parameter <code>BAD_RECORDS_ACTION</code>.</p>
 <ul>
-    <li>single node cluster cannot be scaled up</li>
-    <li>the maximum memory and the CPU computation power has a fixed limit</li>
-    <li>the number of processors are limited in a single node cluster</li>
+    <li>To replace the incorrect values in the CSV rows with NULL and load the data into CarbonData, set the following in the query :</li>
 </ul>
-<p>To harness the actual speed of execution of CarbonData on petabytes of data, it is suggested to
-    use a Multinode Cluster.</p>
-
-<h2 id="what-versions-of-apache-spark-are-compatible-with-carbondata">What versions of Apache Spark
-    are Compatible with CarbonData?</h2>
-<p>Currently <strong>Spark 1.6.2</strong> and <strong>Spark 2.1</strong> is compatible with
-    CarbonData.</p>
-
-<h2 id="can-we-load-data-from-excel">Can we Load Data from excel?</h2>
-<p>Yes, the data can be loaded from excel provided the data is in CSV format.</p>
-
-<h2 id="how-to-enable-single-pass-data-loading">How to enable Single Pass Data Loading?</h2>
-<p>You need to set <strong>SINGLE_PASS</strong> to <code>True</code> and append it to
-    <code>OPTIONS</code> Section in the query as demonstrated in the Load Query below :</p>
-<pre><code>LOAD DATA local inpath '/opt/rawdata/data.csv' INTO table carbontable
-OPTIONS('DELIMITER'=',', 'QUOTECHAR'='&quot;','FILEHEADER'='empno,empname,designation','USE_KETTLE'='FALSE')
+<pre><code>'BAD_RECORDS_ACTION'='FORCE'
 </code></pre>
-<p>Refer to <a
-        href="https://github.com/PallaviSingh1992/incubator-carbondata/blob/6b4dd5f3dea8c93839a94c2d2c80ab7a799cf209/docs/dml-operation-on-carbondata.md">DML-operations-in-CarbonData</a>
-    for more details and example.</p>
-
-<h2 id="what-is-single-pass-data-loading">What is Single Pass Data Loading?</h2>
-<p>Single Pass Loading enables single job to finish data loading with dictionary generation on the
-    fly. It enhances performance in the scenarios where the subsequent data loading after initial
-    load involves fewer incremental updates on the dictionary.<br>
-    This option specifies whether to use single pass for loading data or not. By default this option
-    is set to <code>FALSE</code>.</p>
-
-<h2 id="how-to-specify-the-data-loading-format-for-carbondata">How to specify the data loading
-    format for CarbonData?</h2>
-<p>Edit carbon.properties file. Modify the value of parameter
-    <strong>carbon.data.file.version</strong>.<br>
-    Setting the parameter <strong>carbon.data.file.version</strong> to <code>1</code> will support
-    data loading in <code>old format(0.x version)</code> and setting <strong>carbon.data.file.version</strong>
-    to <code>2</code> will support data loading in <code>new format(1.x onwards)</code> only.<br>
-    By default the data loading is supported using the new format.</p>
-
-<h2 id="how-to-resolve-store-location-can-not-be-found">How to resolve store location can not be
-    found?</h2>
-<p>Try creating <code>carbonsession</code> with <code>storepath</code> specified in the following
-    manner :</p>
+<ul>
+    <li>To write the Bad Records to the raw CSV at the location set in the parameter <strong>carbon.badRecords.location</strong>, without replacing the incorrect values with NULL, set the following in the query :</li>
+</ul>
+<pre><code>'BAD_RECORDS_ACTION'='REDIRECT'
+</code></pre>
+<h2 id="how-to-ignore-the-bad-records"><a id="How_to_ignore_the_Bad_Records_30"></a>How to ignore the Bad Records?</h2>
+<p>To ignore the Bad Records, so that they are neither loaded into CarbonData nor stored in the raw CSV, set the following in the query :</p>
+<pre><code>'BAD_RECORDS_ACTION'='IGNORE'
+</code></pre>
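+<p>As an illustration, a complete load command with Bad Record logging enabled and the
+    REDIRECT action might look like the following (the file path and table name are
+    hypothetical placeholders):</p>
+<pre><code>LOAD DATA INPATH 'hdfs://hacluster/user/loader/sample.csv' INTO TABLE test_table
+OPTIONS('DELIMITER'=',',
+        'BAD_RECORDS_LOGGER_ENABLE'='TRUE',
+        'BAD_RECORDS_ACTION'='REDIRECT')
+</code></pre>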
+<h2 id="how-to-specify-store-location-while-creating-carbon-session"><a id="How_to_specify_store_location_while_creating_carbon_session_36"></a>How to specify store location while creating carbon session?</h2>
+<p>The store location specified while creating the carbon session is used by CarbonData to store metadata such as the schema, dictionary files, dictionary metadata and sort indexes.</p>
+<p>Try creating <code>carbonsession</code> with <code>storepath</code> specified in the following manner :</p>
 <pre><code>val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&lt;store_path&gt;)
 </code></pre>
 <p>Example:</p>
 <pre><code>val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&quot;hdfs://localhost:9000/carbon/store&quot;)
 </code></pre>
-
-<h2 id="what-is-carbon-lockt-ype">What is carbon.lock.type?</h2>
-<p>This property configuration specifies the type of lock to be acquired during concurrent
-    operations on table. This property can be set with the following values :</p>
+<h2 id="what-is-carbon-lock-type"><a id="What_is_Carbon_Lock_Type_48"></a>What is Carbon Lock Type?</h2>
+<p>Apache CarbonData acquires locks on files to prevent concurrent operations from modifying the same files. The lock type depends on the storage location; for HDFS we specify it to be of type HDFSLOCK. By default it is set to type LOCALLOCK.<br>
+    The property carbon.lock.type specifies the type of lock to be acquired during concurrent operations on a table. This property can be set with the following values :</p>
 <ul>
-    <li><strong>LOCALLOCK</strong> : This Lock is created on local file system as file. This lock is
-        useful when only one spark driver (thrift server) runs on a machine and no other CarbonData
-        spark application is launched concurrently.
-    </li>
-    <li><strong>HDFSLOCK</strong> : This Lock is created on HDFS file system as file. This lock is
-        useful when multiple CarbonData spark applications are launched and no ZooKeeper is running
-        on cluster and the HDFS supports, file based locking.
-    </li>
+    <li><strong>LOCALLOCK</strong> : This lock is created on the local file system as a file. This lock is useful when only one spark driver (thrift server) runs on a machine and no other CarbonData spark application is launched concurrently.</li>
+    <li><strong>HDFSLOCK</strong> : This lock is created on the HDFS file system as a file. This lock is useful when multiple CarbonData spark applications are launched, no ZooKeeper is running on the cluster, and the HDFS supports file based locking.</li>
 </ul>
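+<p>For example, on a cluster whose store is on HDFS, the lock type would be configured in the
+    carbon.properties file as shown in this illustrative snippet:</p>
+<pre><code>carbon.lock.type=HDFSLOCK
+</code></pre>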
-
-<h2 id="how-to-enable-auto-compaction">How to enable Auto Compaction?</h2>
-<p>To enable compaction set <strong>carbon.enable.auto.load.merge</strong> to <code>TRUE</code> in
-    the carbon.properties file.</p>
-
-<h2 id="how-to-resolve-abstract-method-error">How to resolve Abstract Method Error?</h2>
-<p>You need to specify the <code>spark version</code> while using Maven to build project.</p>
-
-<h2 id="getting-exception-on-creating-a-view">Getting Exception on Creating a View</h2>
-<p>View not supported in CarbonData.</p>
-
-<h2 id="is-carbondata-supported-for-windows">Is CarbonData supported for Windows?</h2>
-<p>We may provide support for windows in future. You are welcome to contribute if you want to add
-    the support :)</p>
-
-</body></html>
+<h2 id="how-to-resolve-abstract-method-error"><a id="How_to_resolve_Abstract_Method_Error_54"></a>How to resolve Abstract Method Error?</h2>
+<p>In order to build the CarbonData project it is necessary to specify the Spark profile, which sets the Spark version. You need to specify the <code>spark version</code> while using Maven to build the project.</p>
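+<p>For instance, a build against the Spark 2.1 profile would be invoked as below; the concrete
+    version number is an illustrative placeholder:</p>
+<pre><code>mvn -Pspark-2.1 -Dspark.version=2.1.0 clean package
+</code></pre>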

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/main/webapp/docs/latest/quick-start-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/quick-start-guide.html b/src/main/webapp/docs/latest/quick-start-guide.html
index 6f6a864..8be0423 100644
--- a/src/main/webapp/docs/latest/quick-start-guide.html
+++ b/src/main/webapp/docs/latest/quick-start-guide.html
@@ -49,9 +49,16 @@
 </code></p>
 <ul>
     <li>Create a CarbonSession :</li>
-</ul><p><code>
-    val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&quot;&lt;hdfs store path&gt;&quot;)
-</code></p><h4>Executing Queries</h4><h5>Creating a Table</h5><p><code>
+</ul>
+<pre><code>val carbon = SparkSession
+            .builder()
+            .config(sc.getConf)
+            .getOrCreateCarbonSession(&quot;&lt;hdfs store path&gt;&quot;)
+</code></pre>
+<p>NOTE: By default the metastore location points to “…/carbon.metastore”; the user can provide their own metastore location to CarbonSession like</p>
+<pre><code>SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&quot;&lt;hdfs store path&gt;&quot;, &quot;&lt;local metastore path&gt;&quot;)
+</code></pre>
+<h4>Executing Queries</h4><h5>Creating a Table</h5><p><code>
     scala&gt;carbon.sql(&quot;CREATE TABLE IF NOT EXISTS test_table(id string, name string, city string, age Int) STORED BY &#39;carbondata&#39;&quot;)
 </code></p><h5>Loading Data to a Table</h5><p><code>
     scala&gt;carbon.sql(&quot;LOAD DATA INPATH &#39;sample.csv file path&#39; INTO TABLE test_table&quot;)

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/main/webapp/docs/latest/troubleshooting.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/troubleshooting.html b/src/main/webapp/docs/latest/troubleshooting.html
index 4416f16..4c728c6 100644
--- a/src/main/webapp/docs/latest/troubleshooting.html
+++ b/src/main/webapp/docs/latest/troubleshooting.html
@@ -18,28 +18,6 @@
 --><h1>Troubleshooting</h1>
 <p>This tutorial is designed to provide troubleshooting for end users and developers
     who are building, deploying, and using CarbonData.</p>
-<ul>
-    <li><a href="#failed-to-load-thrift-libraries">Failed to load thrift libraries</a></li>
-    <li><a href="#failed-to-launch-the-spark-shell">Failed to launch the Spark Shell</a></li>
-    <li><a href="#query-failure-with-generic-error-on-the-beeline">Query Failure with Generic Error
-        on the Beeline</a></li>
-    <li><a href="#failed-to-execute-load-query-on-cluster">Failed to execute load query on
-        cluster</a></li>
-    <li><a href="#failed-to-execute-insert-query-on-cluster">Failed to execute insert query on
-        cluster</a></li>
-    <li><a href="#failed-to-connect-to-hiveuser-with-thrift">Failed to connect to hiveuser with
-        thrift</a></li>
-    <li><a href="#failure-to-read-the-metastore-db-during-table-creation">Failure to read the
-        metastore db during table creation</a></li>
-    <li><a href="#failed-to-load-data-on-the-cluster">Failed to load data on the cluster</a></li>
-    <li><a href="#failed-to-insert-data-on-the-cluster">Failed to insert data on the cluster</a>
-    </li>
-    <li><a href="#failed-to-execute-concurrent-operations">Failed to execute Concurrent
-        Operations</a></li>
-    <li><a href="#failed-to-create-a-table-with-a-single-numeric-column">Failed to create a table
-        with a single numeric column</a></li>
-    <li><a href="#data-failure-because-of-bad-records">Data Failure because of Bad Records</a></li>
-</ul>
 <h2 id="failed-to-load-thrift-libraries">Failed to load thrift libraries</h2>
 <p><strong>Symptom</strong></p>
 <p>Thrift throws following exception :</p>
@@ -70,8 +48,7 @@ libthriftc.so.0: cannot open shared object file: No such file or directory
 </code></pre>
     </li>
 </ol>
-<pre><code>Note : Remember to add only the path to the directory, not the full path for that file, all the libraries inside that path will be automatically indexed.
-</code></pre>
+<p>Note : Remember to add only the path to the directory, not the full path of the file; all the libraries inside that path will be automatically indexed.</p>
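+<p>As an illustration, assuming the thrift libraries reside under <code>/usr/local/lib</code>
+    (a hypothetical install location), the entry added in the step above would be the bare
+    directory path:</p>
+<pre><code>/usr/local/lib
+</code></pre>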
 <h2 id="failed-to-launch-the-spark-shell">Failed to launch the Spark Shell</h2>
 <p><strong>Symptom</strong></p>
 <p>The shell prompts the following error :</p>
@@ -90,56 +67,11 @@ $OverrideCatalog$$overrides_$e
         <p>Use the following command :</p>
     </li>
 </ol>
-<pre><code>```
+<pre><code>
 &quot;mvn -Pspark-2.1 -Dspark.version={yourSparkVersion} clean package&quot;
-```
 
-Note :  Refrain from using &quot;mvn clean package&quot; without specifying the profile.
 </code></pre>
-<h2 id="query-failure-with-generic-error-on-the-beeline">Query Failure with Generic Error
-    on the Beeline</h2>
-<p><strong>Symptom</strong></p>
-<p>Query fails on the executor side and generic error message is printed on the beeline console</p>
-<p><img src="../../../webapp/docs/latest/images/query_failure_beeline.png?raw=true"
-        alt="Query Failure Beeline"></p>
-<p><strong>Possible Causes</strong></p>
-<ul>
-    <li>In Query flow, Table B-Tree will be loaded into memory on the driver side and filter
-        condition is validated against the min-max of each block to identify false positive,<br>
-        Once the blocks are selected, based on number of available executors, blocks will be
-        distributed to each executor node as shown in below driver logs snapshot
-    </li>
-</ul>
-<p><img src="../../../webapp/docs/latest/images/query_failure_logs.png?raw=true"
-        alt="Query Failure Logs"></p>
-<ul>
-    <li>
-        <p>When the error occurs in driver side while b-tree loading or block distribution, detail
-            error message will be printed on the beeline console and error trace will be printed on
-            the driver logs.</p>
-    </li>
-    <li>
-        <p>When the error occurs in the executor side, generic error message will be printed as
-            shown in issue description.</p>
-    </li>
-</ul>
-<p><img src="../../../webapp/docs/latest/images/query_failure_job_details.png?raw=true"
-        alt="Query Failure Job Details"></p>
-<ul>
-    <li>Details of the failed stages can be seen in the Spark Application UI by clicking on the
-        failed stages on the failed job as shown in previous snapshot
-    </li>
-</ul>
-<p><img src="../../../webapp/docs/latest/images/query_failure_spark_ui.png?raw=true"
-        alt="Query Failure Spark UI"></p>
-<p><strong>Procedure</strong></p>
-<p>Details of the error can be analyzed in details using executor logs available in stdout</p>
-<p><img src="../../../webapp/docs/latest/images/query_failure_procedure.png?raw=true"
-        alt="Query Failure Spark UI"></p>
-<p>Below snapshot shows executor logs with error message for query failure which can be helpful to
-    locate the error</p>
-<p><img src="../../../webapp/docs/latest/images/query_failure_issue.png?raw=true"
-        alt="Query Failure Spark UI"></p>
+<p>Note : Refrain from using &quot;mvn clean package&quot; without specifying the profile.</p>
 <h2 id="failed-to-execute-load-query-on-cluster">Failed to execute load query on cluster.
 </h2>
 <p><strong>Symptom</strong></p>
@@ -282,16 +214,4 @@ Note :  Refrain from using &quot;mvn clean package&quot; without specifying the
 <p>Behavior not supported.</p>
 <p><strong>Procedure</strong></p>
 <p>A single column that can be considered as dimension is mandatory for table creation.</p>
-<h2 id="data-failure-because-of-bad-records">Data Failure because of Bad Records</h2>
-<p><strong>Symptom</strong></p>
-<p>Data Loading fails with the following exception</p>
-<pre><code>Error: java.lang.Exception: Data load failed due to Bad record
-</code></pre>
-<p><strong>Possible Causes</strong></p>
-<p>The parameter BAD_RECORDS_ACTION has not been specified in the Query.</p>
-<p><strong>Procedure</strong></p>
-<p>Set the following parameter in the load command OPTIONS as shown below :</p>
-<p>‘BAD_RECORDS_ACTION’='FORCE‘</p>
-<p><em>Example :</em></p>
-<pre><code>LOAD DATA INPATH 'hdfs://hacluster/user/loader/moredata01.csv' INTO TABLE flow_carbon_256b OPTIONS('DELIMITER'=',', 'BAD_RECORDS_ACTION'='FORCE');
-</code></pre>
+