Posted to commits@carbondata.apache.org by ch...@apache.org on 2016/06/29 13:05:05 UTC

[1/4] incubator-carbondata git commit: Added documentation of carbondata including Build, IDE, DDL, DML and interfaces

Repository: incubator-carbondata
Updated Branches:
  refs/heads/master 42f665956 -> c49833c81


Added documentation of carbondata including Build, IDE, DDL, DML and interfaces


Project: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/commit/5d19fc08
Tree: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/tree/5d19fc08
Diff: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/diff/5d19fc08

Branch: refs/heads/master
Commit: 5d19fc08b9a1d42a39db2c0ce7bec65ab7a7277e
Parents: 42f6659
Author: ravipesala <ra...@gmail.com>
Authored: Wed Jun 29 18:00:33 2016 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Wed Jun 29 18:00:33 2016 +0530

----------------------------------------------------------------------
 docs/Carbon-Interfaces.md                       |  72 +++++++
 docs/Carbondata-File-Structure-and-Format.md    |  36 ++++
 docs/Carbondata-Management.md                   | 144 +++++++++++++
 docs/DDL-Operations-on-Carbon.md                | 177 +++++++++++++++
 docs/DML-Operations-on-Carbon.md                | 216 +++++++++++++++++++
 ...stalling-CarbonData-And-IDE-Configuartion.md |  66 ++++++
 .../format/carbon_data_file_structure_new.png   | Bin 0 -> 78374 bytes
 docs/images/format/carbon_data_format_new.png   | Bin 0 -> 73708 bytes
 docs/images/format/carbon_data_full_scan.png    | Bin 0 -> 35710 bytes
 docs/images/format/carbon_data_motivation.png   | Bin 0 -> 25388 bytes
 docs/images/format/carbon_data_olap_scan.png    | Bin 0 -> 45235 bytes
 docs/images/format/carbon_data_random_scan.png  | Bin 0 -> 46317 bytes
 12 files changed, 711 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/5d19fc08/docs/Carbon-Interfaces.md
----------------------------------------------------------------------
diff --git a/docs/Carbon-Interfaces.md b/docs/Carbon-Interfaces.md
new file mode 100644
index 0000000..dcd3c4a
--- /dev/null
+++ b/docs/Carbon-Interfaces.md
@@ -0,0 +1,72 @@
+## Packaging
+Carbon provides the following JAR packages:
+
+![carbon modules2](https://cloud.githubusercontent.com/assets/6500698/14255195/831c6e90-fac5-11e5-87ab-3b16d84918fb.png)
+
+- carbon-store.jar or carbondata-assembly.jar: This is the main jar for the carbon project; its target users are both end users and developers.
+      - For MapReduce application users, this jar provides an API to read and write carbon files through CarbonInput/OutputFormat in the carbon-hadoop module.
+      - For developers, this jar can be used to integrate carbon with processing engines like Spark and Hive, by leveraging the API in the carbon-processing module.
+
+- carbon-spark.jar (currently part of the assembly jar): provides support for Spark users, who can manipulate carbon data files using the native Spark DataFrame/SQL interface. In addition, to leverage carbon's built-in data lifecycle management, higher level concepts like Managed Carbon Table, Database and the corresponding DDL are introduced.
+
+- carbon-hive.jar (not yet provided): similar to carbon-spark, provides integration between carbon and Hive.
+
+## API
+Carbon can be used in the following scenarios:
+### 1. For MapReduce application user
+This user API is provided by carbon-hadoop. In this scenario, the user can process carbon files in a MapReduce application by choosing CarbonInput/OutputFormat and is responsible for using it correctly. Currently only CarbonInputFormat is provided; CarbonOutputFormat will be provided soon.
+
+
+### 2. For Spark user 
+This user API is provided by Spark itself. There are two levels of APIs:
+-  **Carbon File**
+
+Similar to parquet, json, or other data sources in Spark, carbon can be used with the data source API. For example (please refer to DataFrameAPIExample for more detail):
+```
+// User can create a DataFrame from any data source or transformation.
+val df = ...
+
+// Write data
+// User can write a DataFrame to a carbon file
+df.write
+   .format("org.apache.spark.sql.CarbonSource")
+   .option("tableName", "carbontable")
+   .mode(SaveMode.Overwrite)
+   .save()
+
+
+// Read carbon data through the data source API
+val carbonDF = carbonContext.read
+  .format("org.apache.spark.sql.CarbonSource")
+  .option("tableName", "carbontable")
+  .load("/path")
+
+// User can then use the DataFrame for analysis
+carbonDF.count
+SVMWithSGD.train(carbonDF, numIterations)
+
+// User can also register the DataFrame with a table name, and use SQL for analysis
+carbonDF.registerTempTable("t1")  // register a temporary table in the SparkSQL catalog
+carbonDF.registerHiveTable("t2")  // or, use an implicit function to register it in the Hive metastore
+sqlContext.sql("select count(*) from t1").show
+```
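+
+The snippets above assume a carbonContext is already available. The following is a minimal setup sketch; the CarbonContext constructor arguments (a SparkContext and the carbon store path) are an assumption here, so please check the carbon-spark module for the exact signature:
+```
+import org.apache.spark.{SparkConf, SparkContext}
+import org.apache.spark.sql.CarbonContext
+
+// Create a SparkContext and wrap it in a CarbonContext pointing at the carbon store path
+val sc = new SparkContext(new SparkConf().setAppName("carbon-example"))
+val carbonContext = new CarbonContext(sc, "hdfs://hacluster/user/carbon/store")  // store path is illustrative
+```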
+
+- **Managed Carbon Table**
+
+Since carbon has built-in support for high level concepts like Table and Database, and supports full data lifecycle management, instead of dealing with just files the user can use carbon specific DDL to manipulate data at the Table and Database level. Please refer to [DDL](https://github.com/HuaweiBigData/carbondata/wiki/Language-Manual:-DDL) and [DML](https://github.com/HuaweiBigData/carbondata/wiki/Language-Manual:-DML).
+
+For example:
+```
+// Use SQL to manage table and query data
+carbonContext.sql("create database db1")
+carbonContext.sql("use database db1")
+carbonContext.sql("show databases")
+carbonContext.sql("create table tbl1 using org.carbondata.spark")
+carbonContext.sql("load data into table tlb1 path 'some_files'")
+carbonContext.sql("select count(*) from tbl1")
+```
+
+### 3. For developer
+For developers who want to integrate carbon into a processing engine like Spark/Hive/Flink, use the APIs provided by carbon-hadoop and carbon-processing:
+  - Query: integrate carbon-hadoop with the engine specific API, like the Spark data source API.
+  - Data lifecycle management: carbon provides utility functions in carbon-processing to manage the data lifecycle, such as data loading, compaction, retention and schema evolution. Developers can implement DDLs of their choice and leverage these utility functions for data lifecycle management.

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/5d19fc08/docs/Carbondata-File-Structure-and-Format.md
----------------------------------------------------------------------
diff --git a/docs/Carbondata-File-Structure-and-Format.md b/docs/Carbondata-File-Structure-and-Format.md
new file mode 100644
index 0000000..e5d97b8
--- /dev/null
+++ b/docs/Carbondata-File-Structure-and-Format.md
@@ -0,0 +1,36 @@
+## Use Case & Motivation :  Why introducing a new file format?
+The motivation behind CarbonData is to create a single file format for all kind of query and analysis on Big Data. Existing data storage formats in Hadoop address only specific use cases requiring users to use multiple file formats for various types of queries resulting in unnecessary duplication of data. 
+
+### Sequential Access / Big Scan
+Such queries select only a few columns with a group by clause but do not contain any filters. This results in full scan over the complete store for the selected columns.  
+[[/images/format/carbon_data_full_scan.png|Full Scan Query]]
+
+### OLAP Style Query / Multi-dimensional Analysis
+These are queries which are typically fired from Interactive Analysis tools. Such queries often select a few columns but involve filters and group by on a column or a grouping expression. 
+[[/images/format/carbon_data_olap_scan.png|OLAP Scan Query]]
+
+
+### Random Access / Narrow Scan
+These are queries used from operational applications and usually select all or most of the columns but do involve a large number of filters which reduce the result to a small size. Such queries generally do not involve any aggregation or group by clause.  
+[[/images/format/carbon_data_random_scan.png|Random Scan Query]]
+
+### Single Format to provide low latency response for all usecases
+The main motivation behind CarbonData is to provide a single storage format for all the usecases of querying big data on Hadoop. Thus CarbonData is able to cover all use-cases into a single storage format.
+[[/images/format/carbon_data_motivation.png|Motivation]]
+
+
+## CarbonData File Structure
+A CarbonData file contains groups of data called blocklets, along with all required information like schema, offsets and indices in a file footer.
+
+The file footer can be read once to build the indices in memory, which can be utilized for optimizing the scans and processing for all subsequent queries.
+
+Each blocklet in the file is further divided into chunks of data called Data Chunks. Each data chunk is organized either in columnar format or row format, and stores the data of either a single column or a set of columns. All blocklets in one file contain the same number and type of Data Chunks.
+
+[[/images/format/carbon_data_file_structure_new.png|Carbon File Structure]]
+
+Each Data Chunk contains multiple groups of data called as Pages. There are three types of pages.
+* Data Page: Contains the encoded data of a column/group of columns.
+* Row ID Page (optional): Contains the row id mappings used when the Data Page is stored as an inverted index.
+* RLE Page (optional): Contains additional metadata used when the Data Page is RLE coded.
+
+[[/images/format/carbon_data_format_new.png|Carbon File Format]]
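+
+To make the nesting described above concrete, the layout can be sketched as a simple data model. The case classes below are an illustration only and do not correspond to the actual definitions of the format:
+```
+// Illustrative model of the CarbonData file layout (not the real schema definitions)
+case class DataChunk(dataPage: Array[Byte],              // encoded values of a column or group of columns
+                     rowIdPage: Option[Array[Byte]],     // present when the data page is stored as an inverted index
+                     rlePage: Option[Array[Byte]])       // present when the data page is RLE coded
+
+case class Blocklet(chunks: Seq[DataChunk])              // all blocklets in a file share the same chunk layout
+
+case class FileFooter(schema: Seq[String],               // column schema
+                      blockletOffsets: Seq[Long],        // offsets used to locate blocklets
+                      blockletIndices: Seq[Array[Byte]]) // indices read once and kept in memory
+
+case class CarbonDataFile(blocklets: Seq[Blocklet], footer: FileFooter)
+```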

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/5d19fc08/docs/Carbondata-Management.md
----------------------------------------------------------------------
diff --git a/docs/Carbondata-Management.md b/docs/Carbondata-Management.md
new file mode 100644
index 0000000..06b7cc7
--- /dev/null
+++ b/docs/Carbondata-Management.md
@@ -0,0 +1,144 @@
+
+* [Load Data](#load-data)
+* [Deleting Data](#deleting-data)
+* [Compacting Data](#compacting-data)
+
+
+***
+
+
+# Load Data
+### Scenario
+Once the table is created, data can be loaded into the table using the LOAD DATA command and will be available for query. When a data load is triggered, the data is encoded in Carbon format and copied into the HDFS Carbon store path (specified in the carbon.properties file) in a compressed, multi dimensional columnar format for quick analysis queries.
+The same command can be used to load new data or to update existing data.
+Only one data load can be triggered for a table at a time. High cardinality columns are automatically recognized and will not be dictionary encoded.
+
+### Prerequisite
+
+ The Table must be created.
+
+### Procedure
+
+Data loading is a process that involves execution of various steps to read, sort, and encode the data in the Carbon store format. Each step is executed in different threads.
+After the data loading process is complete, the status (success/partial success) is updated in the Carbon store metadata. The data load status can be:
+
+1. Success: All the data is loaded into the table and no bad records are found.
+2. Partial Success: Data is loaded into the table and bad records are found. Bad records are stored at carbon.badrecords.location.
+
+In case of failure, the error is logged in the error log.
+Details of loads can be seen with the SHOW SEGMENTS command, which displays:
+* Sequence Id
+* Status of data load
+* Load Start time
+* Load End time
+
+Perform the following steps to invoke a data load.
+Run the following command for a historical data load:
+Command:
+```ruby
+LOAD DATA [LOCAL] INPATH 'folder_path' [OVERWRITE] INTO TABLE [db_name.]table_name
+OPTIONS(property_name=property_value, ...)
+```
+OPTIONS is also mandatory for the data loading process. Inside OPTIONS, the user can provide any of the options like DELIMITER, QUOTECHAR, ESCAPECHAR, MULTILINE as needed.
+
+Note: The path must be a canonical path.
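+
+For example, a historical load issued through the Spark SQL interface (as described in Carbon-Interfaces) might look like the following sketch; the path, table name and option values are illustrative only:
+```
+// Load a folder of raw CSV data into a carbon table with commonly used options
+carbonContext.sql(
+  """LOAD DATA INPATH 'hdfs://hacluster/rawdata/sales' INTO TABLE db1.salesTable
+    |OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 'MULTILINE'='true')""".stripMargin)
+```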
+
+***
+
+# Deleting Data
+### Scenario
+If you have loaded wrong data into the table, or there are too many bad records and you want to modify and reload the data, you can delete the required load. The load can be deleted using the load ID, or, if the table contains a date field, using that date field.
+
+### Delete by Segment ID
+
+Each segment has a unique segment ID associated with it. Using this segment ID, you can remove the segment.
+Run the following command to get the segment ID.
+Command:
+```ruby
+SHOW SEGMENTS FOR TABLE dbname.tablename LIMIT number_of_segments
+```
+Example:
+```ruby
+SHOW SEGMENTS FOR TABLE carbonTable
+```
+The above command will show all the segments of the table carbonTable.
+```ruby
+SHOW SEGMENTS FOR TABLE carbonTable LIMIT 3
+```
+The above command will show only the number of segments specified by number_of_segments.
+
+Output:
+
+| SegmentSequenceId | Status | Load Start Time | Load End Time | 
+|--------------|-----------------|--------------------|--------------------| 
+| 2| Success | 2015-11-19 20:25:... | 2015-11-19 20:49:... | 
+| 1| Marked for Delete | 2015-11-19 19:54:... | 2015-11-19 20:08:... | 
+| 0| Marked for Update | 2015-11-19 19:14:... | 2015-11-19 19:14:... | 
+ 
+The show segment command output consists of SegmentSequenceID, START_TIME OF LOAD, END_TIME OF LOAD, and LOAD STATUS. The latest load will be displayed first in the output.
+After you get the segment ID of the segment that you want to delete, execute the following command to delete the selected segment.
+Command:
+```ruby
+DELETE SEGMENT segment_sequence_id1, segment_sequence_id2, .... FROM TABLE tableName
+```
+Example:
+```ruby
+DELETE SEGMENT 1,2,3 FROM TABLE carbonTable
+```
+
+### Delete by Date Field
+
+If the table contains a date field, you can delete the data based on a specific date.
+Command:
+```ruby
+DELETE FROM TABLE [schema_name.]table_name WHERE [DATE_FIELD] BEFORE [DATE_VALUE]
+```
+Example:
+```ruby
+DELETE FROM TABLE table_name WHERE productionDate BEFORE '2017-07-01'
+```
+Here productionDate is a column of type timestamp.
+The above command will delete all the data before the date '2017-07-01'.
+
+
+Note: 
+* When the delete segment DML is called, the segment will not be deleted physically from the file system. Instead, the segment status will be marked as "Marked for Delete". During query execution, this deleted segment will be excluded.
+* The deleted segment will be deleted physically during the next load operation, and only after the maximum query execution time configured using "max.query.execution.time" has elapsed. By default it is 60 minutes.
+* If the user wants to force delete the segment physically, the CLEAN FILES command can be used.
+Example:
+```ruby
+CLEAN FILES FOR TABLE table1
+```
+This command will immediately and physically delete the segments which are "Marked for Delete".
+
+
+
+***
+
+# Compacting Data
+### Scenario
+Frequent data ingestion results in several fragmented carbon files in the store directory. Since data is sorted only within each load, the indices work only within each load. This means that there will be one index per load, and as the number of data loads increases, the number of indices also increases. As each index works only on one load, the effectiveness of the indices is reduced. Carbon provides a provision for compacting the loads. The compaction process combines several segments into one large segment by merge sorting the data across the segments.
+
+### Prerequisite
+
+ The data should be loaded multiple times.
+
+### Procedure
+
+There are two types of compaction: Minor and Major compaction.
+
+**Minor Compaction:**
+In minor compaction, the user can specify how many loads are to be merged. Minor compaction is triggered for every data load if the parameter carbon.enable.auto.load.merge is set. If any segments are available to be merged, then compaction will run in parallel with the data load.
+There are 2 levels in minor compaction.
+* Level 1: Merging of the segments which are not yet compacted.
+* Level 2: Merging of the compacted segments again to form a bigger segment.
+
+**Major Compaction:**
+In major compaction, many segments can be merged into one big segment. The user specifies the compaction size up to which segments can be merged. Major compaction is usually done during off-peak time.
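+
+For example, compaction can be triggered through the Spark SQL interface once several loads have accumulated; the table name below is illustrative, and the full syntax is described in the COMPACTION section of the DDL documentation:
+```
+// Merge the segments of a table that has accumulated many small loads
+carbonContext.sql("ALTER TABLE sales_table COMPACT 'MINOR'")  // merges loads according to carbon.compaction.level.threshold
+carbonContext.sql("ALTER TABLE sales_table COMPACT 'MAJOR'")  // merges segments whose total size is below carbon.major.compaction.size
+```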
+
+### Parameters of Compaction
+| Parameter | Default | Applicable | Description | 
+| --------- | --------| -----------|-------------|
+| carbon.compaction.level.threshold | 4,3 | Minor | This property is for minor compaction and decides how many segments are merged. **Example**: if it is set to 2,3 then minor compaction is triggered for every 2 segments. 3 is the number of level 1 compacted segments which are further compacted into a new segment. Valid values are from 0-100. |
+| carbon.major.compaction.size | 1024 mb | Major | Major compaction size can be configured using this parameter. Segments whose total size is below this threshold will be merged. |
+| carbon.numberof.preserve.segments | 0 | Minor/Major| If the user wants to preserve some number of segments from being compacted, this property can be set. **Example**: with carbon.numberof.preserve.segments=2, the 2 latest segments will always be excluded from compaction. No segments are preserved by default. |
+| carbon.allowed.compaction.days | 0 | Minor/Major| Compaction will merge the segments which are loaded within the specified number of days. **Example**: if the configuration is 2, then only segments which are loaded within a time frame of 2 days will get merged. Segments which are loaded more than 2 days apart will not be merged. This is disabled by default. |
+| carbon.number.of.cores.while.compacting | 2 | Minor/Major| Number of cores used to write data during compaction. |

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/5d19fc08/docs/DDL-Operations-on-Carbon.md
----------------------------------------------------------------------
diff --git a/docs/DDL-Operations-on-Carbon.md b/docs/DDL-Operations-on-Carbon.md
new file mode 100644
index 0000000..8fed4b2
--- /dev/null
+++ b/docs/DDL-Operations-on-Carbon.md
@@ -0,0 +1,177 @@
+
+* [CREATE TABLE](#create-table)
+* [SHOW TABLE](#show-table)
+* [DROP TABLE](#drop-table)
+* [COMPACTION](#compaction)
+
+***
+
+
+# CREATE TABLE
+### Function
+This command can be used to create a carbon table by specifying the list of fields along with the table properties.
+
+### Syntax
+
+  ```ruby
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
+               [(col_name data_type , ...)]               
+         STORED BY 'org.apache.carbondata.format'
+               [TBLPROPERTIES (property_name=property_value, ...)]
+               // All Carbon's additional table options will go into properties
+  ```
+     
+**Example:**
+
+  ```ruby
+  CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+                  productNumber Int,
+                  productName String, 
+                  storeCity String, 
+                  storeProvince String, 
+                  productCategory String, 
+                  productBatch String,
+                  saleQuantity Int,
+                  revenue Int)       
+       STORED BY 'org.apache.carbondata.format' 
+       TBLPROPERTIES ('COLUMN_GROUPS'='(productName,productCategory)',
+                     'DICTIONARY_EXCLUDE'='productName',
+                     'DICTIONARY_INCLUDE'='productNumber')
+  ```
+
+### Parameter Description
+
+| Parameter | Description |
+| ------------- | -----|
+| db_name | Name of the database. The database name should consist of alphanumeric characters and the underscore (_) special character. |
+| field_list | Comma separated list of fields with data type. The field names should consist of alphanumeric characters and the underscore (_) special character.|
+| table_name | The name of the table in the database. The table name should consist of alphanumeric characters and the underscore (_) special character. |
+| STORED BY | "org.apache.carbondata.format" identifies and creates the carbon table. |
+| TBLPROPERTIES | List of carbon table properties. |
+
+### Usage Guideline
+Following is the usage of the table properties.
+
+ - **Dictionary Encoding Configuration**
+
+   By default, dictionary encoding is enabled for all String columns and disabled for non-String columns. The user can include and exclude columns for dictionary encoding.
+
+  ```ruby
+  TBLPROPERTIES ("DICTIONARY_EXCLUDE"="column1, column2") 
+  TBLPROPERTIES ("DICTIONARY_INCLUDE"="column1, column2") 
+  ```
+Here, DICTIONARY_EXCLUDE will exclude dictionary creation. This is applicable for high-cardinality columns and is an optional parameter. DICTIONARY_INCLUDE will generate a dictionary for the columns specified in the list.
+
+ - **Row/Column Format Configuration**
+
+   Column groups with more than one column are stored in row format, instead of columnar format. By default, each column is a separate column group.
+
+  ```ruby
+  TBLPROPERTIES ("COLUMN_GROUPS"="(column1,column3),(Column4,Column5,Column6)") 
+  ```
+
+### Scenarios
+#### Create table by specifying schema
+
+ The create table command is the same as the Hive DDL. Carbon's extra configurations are given as table properties.
+
+  ```ruby
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
+               [(col_name data_type , ...)]
+         STORED BY 'org.apache.carbondata.format'
+               [TBLPROPERTIES (property_name=property_value ,...)]             
+  ```
+***
+
+# SHOW TABLE
+### Function
+This command can be used to list all the tables in the current database or all the tables of a specific database.
+
+### Syntax
+
+  ```ruby
+  SHOW TABLES [IN db_Name];
+  ```
+
+**Example:**
+
+  ```ruby
+  SHOW TABLES IN ProductSchema;
+  ```
+
+### Parameter Description
+| Parameter | Description |
+|-----------|-------------|
+| IN db_Name | Name of the database. Required only if tables of this specific database are to be listed. |
+
+### Usage Guideline
+IN db_Name is optional.
+
+### Scenarios
+NA
+
+***
+
+# DROP TABLE
+### Function
+This command can be used to delete an existing table.
+
+### Syntax
+
+  ```ruby
+  DROP TABLE [IF EXISTS] [db_name.]table_name;
+  ```
+
+**Example:**
+
+  ```ruby
+  DROP TABLE IF EXISTS productSchema.productSalesTable;
+  ```
+
+### Parameter Description
+| Parameter | Description |
+|-----------|-------------|
+| db_Name | Name of the database. If not specified, current database will be selected. |
+| table_name | Name of the table to be deleted. |
+
+### Usage Guideline
+In this command IF EXISTS and db_name are optional.
+
+### Scenarios
+NA
+
+***
+
+# COMPACTION
+### Function
+ This command will merge the specified number of segments into one segment. This will enhance the query performance of the table.
+
+### Syntax
+
+  ```ruby
+  ALTER TABLE [db_name.]table_name COMPACT 'MINOR/MAJOR'
+  ```
+
+**Example:**
+
+  ```ruby
+  ALTER TABLE carbontable COMPACT 'MINOR'
+  ALTER TABLE carbontable COMPACT 'MAJOR'
+  ```
+
+### Parameter Description
+
+| Parameter | Description |
+| ------------- | -----|
+| db_name | Database name, if it is not specified then it uses current database. |
+| table_name | The name of the table in provided database.|
+ 
+
+### Usage Guideline
+NA
+
+### Scenarios
+NA
+
+
+***
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/5d19fc08/docs/DML-Operations-on-Carbon.md
----------------------------------------------------------------------
diff --git a/docs/DML-Operations-on-Carbon.md b/docs/DML-Operations-on-Carbon.md
new file mode 100644
index 0000000..6ce1bea
--- /dev/null
+++ b/docs/DML-Operations-on-Carbon.md
@@ -0,0 +1,216 @@
+* [LOAD DATA](#load-data)
+* [SHOW SEGMENTS](#show-segments)
+* [DELETE SEGMENT BY ID](#delete-segment-by-id)
+* [DELETE SEGMENT BY DATE](#delete-segment-by-date)
+
+***
+
+# LOAD DATA
+### Function
+ This command loads user data in raw format into the Carbon specific data format store; this way Carbon provides good performance while querying the data.
+
+### Syntax
+
+  ```ruby
+  LOAD DATA [LOCAL] INPATH 'folder_path' INTO TABLE [db_name.]table_name 
+              OPTIONS(property_name=property_value, ...)
+  ```
+
+**Example:**
+
+  ```ruby
+  LOAD DATA local inpath '/opt/rawdata/data.csv' INTO table carbontable
+                         options('DELIMITER'=',', 'QUOTECHAR'='"',
+                                 'FILEHEADER'='empno,empname,
+                                  designation,doj,workgroupcategory,
+                                  workgroupcategoryname,deptno,deptname,projectcode,
+                                  projectjoindate,projectenddate,attendance,utilization,salary',
+                                 'MULTILINE'='true', 'ESCAPECHAR'='\', 
+                                 'COMPLEX_DELIMITER_LEVEL_1'='$', 
+                                 'COMPLEX_DELIMITER_LEVEL_2'=':',
+                                 'LOCAL_DICTIONARY_PATH'='/opt/localdictionary/',
+                                 'DICTIONARY_FILE_EXTENSION'='.dictionary') 
+  ```
+
+### Parameter Description
+
+| Parameter | Description |
+| ------------- | -----|
+| folder_path | Path of raw csv data folder or file. |
+| db_name | Database name, if it is not specified then it uses current database. |
+| table_name | The name of the table in provided database.|
+ 
+
+### Usage Guideline
+Following are the options that can be used in LOAD DATA:
+- **DELIMITER:** The field delimiter used in the CSV files can be provided in the load command.
+    
+    ``` ruby
+    OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"') 
+    ```
+- **QUOTECHAR:** The quote character used in the CSV files can be provided in the load command.
+
+    ```ruby
+    OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"') 
+    ```
+- **FILEHEADER:** Headers can be provided in the LOAD DATA command if headers are missing in the source files.
+
+    ```ruby
+    OPTIONS('FILEHEADER'='column1,column2') 
+    ```
+- **MULTILINE:** CSV with new line character in quotes.
+
+    ```ruby
+    OPTIONS('MULTILINE'='true') 
+    ```
+- **ESCAPECHAR:** An escape character can be provided if the user wants strict validation of the escape character in the CSV files.
+
+    ```ruby
+    OPTIONS('ESCAPECHAR'='\') 
+    ```
+- **COMPLEX_DELIMITER_LEVEL_1:** Split the complex type data column in a row (e.g., a$b$c --> Array = {a,b,c}).
+
+    ```ruby
+    OPTIONS('COMPLEX_DELIMITER_LEVEL_1'='$') 
+    ```
+- **COMPLEX_DELIMITER_LEVEL_2:** Split the complex type nested data column in a row. The level_1 delimiter is applied first, then the level_2 delimiter based on the complex data type (e.g., a:b$c:d --> Array of Array = {{a,b},{c,d}}).
+
+    ```ruby
+    OPTIONS('COMPLEX_DELIMITER_LEVEL_2'=':') 
+    ```
+- **LOCAL_DICTIONARY_PATH:** Local dictionary files path.
+
+    ```ruby
+    OPTIONS('LOCAL_DICTIONARY_PATH'='/opt/localdictionary/') 
+    ```
+- **DICTIONARY_FILE_EXTENSION:** local Dictionary file extension.
+
+    ```ruby
+    OPTIONS('DICTIONARY_FILE_EXTENSION'='.dictionary') 
+    ```
+
+### Scenarios
+
+#### Load from CSV files
+
+To load a carbon table from a CSV file, use the following syntax.
+
+  ```ruby
+  LOAD DATA [LOCAL] INPATH 'folder path' INTO TABLE tablename OPTIONS(property_name=property_value, ...)
+  ```
+
+ **Example:**
+  
+  ```ruby
+  LOAD DATA local inpath './src/test/resources/data.csv' INTO table carbontable 
+                      options('DELIMITER'=',', 'QUOTECHAR'='"', 
+                              'FILEHEADER'='empno,empname,designation,doj,
+                               workgroupcategory,workgroupcategoryname,
+                               deptno,deptname,projectcode,projectjoindate,
+                               projectenddate,attendance,utilization,salary', 
+                              'MULTILINE'='true', 'ESCAPECHAR'='\', 
+                              'COMPLEX_DELIMITER_LEVEL_1'='$', 'COMPLEX_DELIMITER_LEVEL_2'=':', 
+                              'LOCAL_DICTIONARY_PATH'='/opt/localdictionary/','DICTIONARY_FILE_EXTENSION'='.dictionary')
+  ```
+
+***
+
+# SHOW SEGMENTS
+### Function
+This command shows the segments of a carbon table to the user.
+
+### Syntax
+
+  ```ruby
+  SHOW SEGMENTS FOR TABLE [db_name.]table_name LIMIT number_of_segments;
+  ```
+
+**Example:**
+
+  ```ruby
+  SHOW SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 2;
+  ```
+
+### Parameter Description
+
+| Parameter | Description |
+| ------------- | -----|
+| db_name | Database name, if it is not specified then it uses current database. |
+| table_name | The name of the table in provided database.|
+| number_of_segments | Limit the output to this number of segments. |
+
+### Usage Guideline
+NA
+
+### Scenarios
+NA
+
+***
+
+# DELETE SEGMENT BY ID
+### Function
+
+This command deletes a segment by its segment ID.
+
+### Syntax
+
+  ```ruby
+  DELETE SEGMENT segment_id1,segment_id2 FROM TABLE [db_name.]table_name;
+  ```
+
+**Example:**
+
+  ```ruby
+  DELETE SEGMENT 0 FROM TABLE CarbonDatabase.CarbonTable;
+  DELETE SEGMENT 0.1,5,8 FROM TABLE CarbonDatabase.CarbonTable;
+  Note: Here 0.1 is compacted segment sequence id.  
+  ```
+
+### Parameter Description
+
+| Parameter | Description |
+| ------------- | -----|
+| segment_id | Segment Id of the load. |
+| db_name | Database name, if it is not specified then it uses current database. |
+| table_name | The name of the table in provided database.|
+
+### Usage Guideline
+NA
+
+### Scenarios
+NA
+
+***
+
+# DELETE SEGMENT BY DATE
+### Function
+
+This command deletes the Carbon segment(s) from the store based on the date provided by the user in the DML command. The segments created before the given date will be removed from the store.
+
+### Syntax
+
+  ```ruby
+  DELETE SEGMENTS FROM TABLE [db_name.]table_name WHERE STARTTIME BEFORE [DATE_VALUE];
+  ```
+
+**Example:**
+
+  ```ruby
+  DELETE SEGMENTS FROM TABLE CarbonDatabase.CarbonTable WHERE STARTTIME BEFORE '2017-06-01 12:05:06';  
+  ```
+
+### Parameter Description
+
+| Parameter | Description |
+| ------------- | -----|
+| DATE_VALUE | Valid segment load start time value. All the segments before this specified date will be deleted. |
+| db_name | Database name, if it is not specified then it uses current database. |
+| table_name | The name of the table in provided database.|
+
+### Usage Guideline
+NA
+
+### Scenarios
+NA
+
+***
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/5d19fc08/docs/Installing-CarbonData-And-IDE-Configuartion.md
----------------------------------------------------------------------
diff --git a/docs/Installing-CarbonData-And-IDE-Configuartion.md b/docs/Installing-CarbonData-And-IDE-Configuartion.md
new file mode 100644
index 0000000..7ada9cc
--- /dev/null
+++ b/docs/Installing-CarbonData-And-IDE-Configuartion.md
@@ -0,0 +1,66 @@
+### Building CarbonData
+Prerequisites for building CarbonData:
+* Unix-like environment (Linux, Mac OS X)
+* git
+* Apache Maven (we recommend version 3.3 or later)
+* Java 7 or 8
+* Scala 2.10
+* Apache Thrift 0.9.3
+
+I. Clone CarbonData
+```
+$ git clone https://github.com/apache/incubator-carbondata.git
+```
+II. Build the project 
+* Build without test:
+```
+$ mvn -DskipTests clean package 
+```
+* Build along with test:
+```
+$ mvn clean package
+```
+* Build with different Spark versions (by default it builds against Spark 1.5.2)
+```
+$ mvn -Pspark-1.5.2 clean package
+            or
+$ mvn -Pspark-1.6.1 clean install
+```
+* Build along with integration test cases (note: it takes more time to build):
+```
+$ mvn -Pintegration-test clean package
+```
+
+### Developing CarbonData
+The CarbonData committers use IntelliJ IDEA and Eclipse IDE to develop.
+
+#### IntelliJ IDEA
+* Download IntelliJ at https://www.jetbrains.com/idea/ and install the Scala plug-in for IntelliJ at http://plugins.jetbrains.com/plugin/?id=1347
+* Go to "File -> Import Project", locate the CarbonData source directory, and select "Maven Project".
+* In the Import Wizard, select "Import Maven projects automatically" and leave the other settings at their defaults.
+* You should then be able to start your development.
+* When you run the Scala tests, you may sometimes get an out of memory exception. You can increase the VM memory with the following setting, for example:
+```
+-XX:MaxPermSize=512m -Xmx3072m
+```
+You can also make these settings the default via "Defaults -> ScalaTest".
+
+#### Eclipse
+* Download the Scala IDE (preferred) or install the Scala plugin in Eclipse.
+* Import the CarbonData Maven projects ("File" -> "Import" -> "Maven" -> "Existing Maven Projects" -> locate the CarbonData source directory).
+
+### Getting Started
+Read the [quick start](https://github.com/HuaweiBigData/carbondata/wiki/Quick-Start).
+
+### Fork and Contribute
+This is an open source project for everyone, and we are always open to people who want to use this system or contribute to it.
+This guide introduces [how to contribute to CarbonData](https://github.com/HuaweiBigData/carbondata/wiki/How-to-contribute-and-Code-Style).
+
+### Contact us
+To get involved in CarbonData:
+
+* [Subscribe](mailto:dev-subscribe@carbondata.incubator.apache.org) then [mail](mailto:dev@carbondata.incubator.apache.org) us
+* Report issues on [Jira](https://issues.apache.org/jira/browse/CARBONDATA).
+
+### About
+The CarbonData project was originally contributed by [Huawei](http://www.huawei.com).

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/5d19fc08/docs/images/format/carbon_data_file_structure_new.png
----------------------------------------------------------------------
diff --git a/docs/images/format/carbon_data_file_structure_new.png b/docs/images/format/carbon_data_file_structure_new.png
new file mode 100644
index 0000000..3f9241b
Binary files /dev/null and b/docs/images/format/carbon_data_file_structure_new.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/5d19fc08/docs/images/format/carbon_data_format_new.png
----------------------------------------------------------------------
diff --git a/docs/images/format/carbon_data_format_new.png b/docs/images/format/carbon_data_format_new.png
new file mode 100644
index 0000000..9d0b194
Binary files /dev/null and b/docs/images/format/carbon_data_format_new.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/5d19fc08/docs/images/format/carbon_data_full_scan.png
----------------------------------------------------------------------
diff --git a/docs/images/format/carbon_data_full_scan.png b/docs/images/format/carbon_data_full_scan.png
new file mode 100644
index 0000000..46715e7
Binary files /dev/null and b/docs/images/format/carbon_data_full_scan.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/5d19fc08/docs/images/format/carbon_data_motivation.png
----------------------------------------------------------------------
diff --git a/docs/images/format/carbon_data_motivation.png b/docs/images/format/carbon_data_motivation.png
new file mode 100644
index 0000000..6e454c6
Binary files /dev/null and b/docs/images/format/carbon_data_motivation.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/5d19fc08/docs/images/format/carbon_data_olap_scan.png
----------------------------------------------------------------------
diff --git a/docs/images/format/carbon_data_olap_scan.png b/docs/images/format/carbon_data_olap_scan.png
new file mode 100644
index 0000000..c1dfb18
Binary files /dev/null and b/docs/images/format/carbon_data_olap_scan.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/5d19fc08/docs/images/format/carbon_data_random_scan.png
----------------------------------------------------------------------
diff --git a/docs/images/format/carbon_data_random_scan.png b/docs/images/format/carbon_data_random_scan.png
new file mode 100644
index 0000000..7d44d34
Binary files /dev/null and b/docs/images/format/carbon_data_random_scan.png differ


[4/4] incubator-carbondata git commit: Added documentation of carbondata This closes #4

Posted by ch...@apache.org.
Added documentation of carbondata This closes #4


Project: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/commit/c49833c8
Tree: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/tree/c49833c8
Diff: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/diff/c49833c8

Branch: refs/heads/master
Commit: c49833c81ac4b1074587d8330db1de1b00d3c017
Parents: 42f6659 0bedd8b
Author: chenliang613 <ch...@apache.org>
Authored: Wed Jun 29 18:34:16 2016 +0530
Committer: chenliang613 <ch...@apache.org>
Committed: Wed Jun 29 18:34:16 2016 +0530

----------------------------------------------------------------------
 docs/Carbon-Interfaces.md                       |  72 +++++++
 docs/Carbondata-File-Structure-and-Format.md    |  36 ++++
 docs/Carbondata-Management.md                   | 144 +++++++++++++
 docs/DDL-Operations-on-Carbon.md                | 177 +++++++++++++++
 docs/DML-Operations-on-Carbon.md                | 216 +++++++++++++++++++
 ...stalling-CarbonData-And-IDE-Configuartion.md |  66 ++++++
 .../format/carbon_data_file_structure_new.png   | Bin 0 -> 78374 bytes
 docs/images/format/carbon_data_format_new.png   | Bin 0 -> 73708 bytes
 docs/images/format/carbon_data_full_scan.png    | Bin 0 -> 35710 bytes
 docs/images/format/carbon_data_motivation.png   | Bin 0 -> 25388 bytes
 docs/images/format/carbon_data_olap_scan.png    | Bin 0 -> 45235 bytes
 docs/images/format/carbon_data_random_scan.png  | Bin 0 -> 46317 bytes
 12 files changed, 711 insertions(+)
----------------------------------------------------------------------



[3/4] incubator-carbondata git commit: corrected image path

Posted by ch...@apache.org.
corrected image path


Project: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/commit/0bedd8b8
Tree: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/tree/0bedd8b8
Diff: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/diff/0bedd8b8

Branch: refs/heads/master
Commit: 0bedd8b85dbda60ba784ebb0150c88eb8da3fb4b
Parents: 0963f83
Author: ravipesala <ra...@gmail.com>
Authored: Wed Jun 29 18:23:06 2016 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Wed Jun 29 18:23:06 2016 +0530

----------------------------------------------------------------------
 docs/Carbondata-File-Structure-and-Format.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/0bedd8b8/docs/Carbondata-File-Structure-and-Format.md
----------------------------------------------------------------------
diff --git a/docs/Carbondata-File-Structure-and-Format.md b/docs/Carbondata-File-Structure-and-Format.md
index d1d4971..7b73297 100644
--- a/docs/Carbondata-File-Structure-and-Format.md
+++ b/docs/Carbondata-File-Structure-and-Format.md
@@ -6,7 +6,7 @@ Such queries select only a few columns with a group by clause but do not contain
 ![Full Scan Query](/docs/images/format/carbon_data_full_scan.png?raw=true)
 
 ### OLAP Style Query / Multi-dimensional Analysis
-These are queries which are typically fired from Interactive Analysis tools. Such queries often select a few columns but involve filters and group by on a column or a grouping expression. 
+These are queries which are typically fired from Interactive Analysis tools. Such queries often select a few columns but involve filters and group by on a column or a grouping expression.  
 ![OLAP Scan Query](/docs/images/format/carbon_data_olap_scan.png?raw=true)
 
 


[2/4] incubator-carbondata git commit: Corrected image path

Posted by ch...@apache.org.
Corrected image path


Project: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/commit/0963f838
Tree: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/tree/0963f838
Diff: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/diff/0963f838

Branch: refs/heads/master
Commit: 0963f8389feed68302d5e70acbd9b7ce07ef9be7
Parents: 5d19fc0
Author: ravipesala <ra...@gmail.com>
Authored: Wed Jun 29 18:20:44 2016 +0530
Committer: ravipesala <ra...@gmail.com>
Committed: Wed Jun 29 18:20:44 2016 +0530

----------------------------------------------------------------------
 docs/Carbondata-File-Structure-and-Format.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/0963f838/docs/Carbondata-File-Structure-and-Format.md
----------------------------------------------------------------------
diff --git a/docs/Carbondata-File-Structure-and-Format.md b/docs/Carbondata-File-Structure-and-Format.md
index e5d97b8..d1d4971 100644
--- a/docs/Carbondata-File-Structure-and-Format.md
+++ b/docs/Carbondata-File-Structure-and-Format.md
@@ -3,20 +3,20 @@ The motivation behind CarbonData is to create a single file format for all kind
 
 ### Sequential Access / Big Scan
 Such queries select only a few columns with a group by clause but do not contain any filters. This results in full scan over the complete store for the selected columns.  
-[[/images/format/carbon_data_full_scan.png|Full Scan Query]]
+![Full Scan Query](/docs/images/format/carbon_data_full_scan.png?raw=true)
 
 ### OLAP Style Query / Multi-dimensional Analysis
 These are queries which are typically fired from Interactive Analysis tools. Such queries often select a few columns but involve filters and group by on a column or a grouping expression. 
-[[/images/format/carbon_data_olap_scan.png|OLAP Scan Query]]
+![OLAP Scan Query](/docs/images/format/carbon_data_olap_scan.png?raw=true)
 
 
 ### Random Access / Narrow Scan
 These are queries used from operational applications and usually select all or most of the columns but do involve a large number of filters which reduce the result to a small size. Such queries generally do not involve any aggregation or group by clause.  
-[[/images/format/carbon_data_random_scan.png|Random Scan Query]]
+![Random Scan Query](/docs/images/format/carbon_data_random_scan.png?raw=true)
 
 ### Single Format to provide low latency response for all usecases
 The main motivation behind CarbonData is to provide a single storage format for all the usecases of querying big data on Hadoop. Thus CarbonData is able to cover all use-cases into a single storage format.
-[[/images/format/carbon_data_motivation.png|Motivation]]
+![Motivation](/docs/images/format/carbon_data_motivation.png?raw=true)
 
 
 ## CarbonData File Structure
@@ -26,11 +26,11 @@ The file footer can be read once to build the indices in memory, which can be ut
 
 Each blocklet in the file is further divided into chunks of data called Data Chunks. Each data chunk is organized either in columnar format or row format, and stores the data of either a single column or a set of columns. All blocklets in one file contain the same number and type of Data Chunks.
 
-[[/images/format/carbon_data_file_structure_new.png|Carbon File Structure]]
+![Carbon File Structure](/docs/images/format/carbon_data_file_structure_new.png?raw=true)
 
 Each Data Chunk contains multiple groups of data called as Pages. There are three types of pages.
 * Data Page: Contains the encoded data of a column/group of columns.
 * Row ID Page (optional): Contains the row id mappings used when the Data Page is stored as an inverted index.
 * RLE Page (optional): Contains additional metadata used when the Data Page is RLE coded.
 
-[[/images/format/carbon_data_format_new.png|Carbon File Format]]
+![Carbon File Format](/docs/images/format/carbon_data_format_new.png?raw=true)