Posted to commits@carbondata.apache.org by ch...@apache.org on 2017/11/28 15:43:13 UTC

carbondata git commit: [CARBONDATA-1821] Updated md files for correcting heading

Repository: carbondata
Updated Branches:
  refs/heads/master 0f407de8c -> 445615fea


[CARBONDATA-1821] Updated md files for correcting heading

Updated md files for correcting heading

This closes #1582


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/445615fe
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/445615fe
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/445615fe

Branch: refs/heads/master
Commit: 445615fea976766a529c9c21a7ed805cac5b9d1b
Parents: 0f407de
Author: vandana <va...@gmail.com>
Authored: Tue Nov 28 11:19:35 2017 +0530
Committer: chenliang613 <ch...@huawei.com>
Committed: Tue Nov 28 23:42:58 2017 +0800

----------------------------------------------------------------------
 docs/data-management-on-carbondata.md | 27 +++++++++++++++------------
 docs/release-guide.md                 | 19 +++++++++----------
 docs/troubleshooting.md               |  2 +-
 3 files changed, 25 insertions(+), 23 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/445615fe/docs/data-management-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/data-management-on-carbondata.md b/docs/data-management-on-carbondata.md
index 1a3d0a8..28e9fbb 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -90,10 +90,11 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
      ```
      TBLPROPERTIES ('TABLE_BLOCKSIZE'='512')
      ```
-     Note: 512 or 512M both are accepted.
+     NOTE: 512 or 512M both are accepted.
 
 ### Example:
-    ```
+
+   ```
     CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
                                    productNumber Int,
                                    productName String,
@@ -109,7 +110,7 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
                    'SORT_COLUMNS'='productName,storeCity',
                    'SORT_SCOPE'='NO_SORT',
                    'TABLE_BLOCKSIZE'='512')
-    ```
+   ```
         
 ## TABLE MANAGEMENT  
 
@@ -194,7 +195,7 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
      Valid Scenarios
      - Invalid scenario - Change of decimal precision from (10,2) to (10,5) is invalid as in this case only scale is increased but total number of digits remains the same.
      - Valid scenario - Change of decimal precision from (10,2) to (12,3) is valid as the total number of digits are increased by 2 but scale is increased only by 1 which will not lead to any data loss.
-     - Note :The allowed range is 38,38 (precision, scale) and is a valid upper case scenario which is not resulting in data loss.
+     - NOTE: The allowed range is 38,38 (precision, scale) and is a valid upper case scenario which is not resulting in data loss.
 
      Example1:Changing data type of column a1 from INT to BIGINT.
      ```
@@ -298,23 +299,25 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
 
     ```
     OPTIONS('DATEFORMAT' = 'yyyy-MM-dd','TIMESTAMPFORMAT'='yyyy-MM-dd HH:mm:ss')
-     ```
+    ```
     NOTE: Date formats are specified by date pattern strings. The date pattern letters in CarbonData are same as in JAVA. Refer to [SimpleDateFormat](http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html).
 
   - **SINGLE_PASS:** Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary.
 
-   This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE.
+  This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE.
 
-    ```
+   ```
     OPTIONS('SINGLE_PASS'='TRUE')
-    ```
-   Note :
+   ```
+
+   NOTE:
    * If this option is set to TRUE then data loading will take less time.
    * If this option is set to some invalid value other than TRUE or FALSE then it uses the default value.
    * If this option is set to TRUE, then high.cardinality.identify.enable property will be disabled during data load.
    * For first Load SINGLE_PASS loading option is disabled.
 
    Example:
+
    ```
    LOAD DATA local inpath '/opt/rawdata/data.csv' INTO table carbontable
    options('DELIMITER'=',', 'QUOTECHAR'='"','COMMENTCHAR'='#',
@@ -346,6 +349,7 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
   * The maximum number of characters per column is 100000. If there are more than 100000 characters in a column, data loading will fail.
 
   Example:
+
   ```
   LOAD DATA INPATH 'filepath.csv' INTO TABLE tablename
   OPTIONS('BAD_RECORDS_LOGGER_ENABLE'='true','BAD_RECORD_PATH'='hdfs://hacluster/tmp/carbon',
@@ -569,7 +573,7 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
   [TBLPROPERTIES ('PARTITION_TYPE'='LIST',
                   'LIST_INFO'='A, B, C, ...')]
   ```
-  NOTE : List partition supports list info in one level group.
+  NOTE: List partition supports list info in one level group.
 
   Example:
   ```
@@ -608,8 +612,7 @@ This tutorial is going to introduce all commands and data operations on CarbonDa
 
 ### Drop a partition
 
-    Only drop partition definition, but keep data
-
+   Only drop partition definition, but keep data
   ```
     ALTER TABLE [db_name].table_name DROP PARTITION(partition_id)
    ```

http://git-wip-us.apache.org/repos/asf/carbondata/blob/445615fe/docs/release-guide.md
----------------------------------------------------------------------
diff --git a/docs/release-guide.md b/docs/release-guide.md
index c63bc1b..e626ccb 100644
--- a/docs/release-guide.md
+++ b/docs/release-guide.md
@@ -216,8 +216,7 @@ x.x.x release".
 
 Copy the source release to dev repository on `dist.apache.org`.
 
-1. If you have not already, check out the section of the `dev` repository on `dist
-.apache.org` via Subversion. In a fresh directory:
+1. If you have not already, check out the section of the `dev` repository on `dist.apache.org` via Subversion. In a fresh directory:
 
 ```
 svn co https://dist.apache.org/repos/dist/dev/carbondata
@@ -244,7 +243,7 @@ svn commit
 
 5. Verify the files are [present](https://dist.apache.org/repos/dist/dev/carbondata).
 
-### Propose a pull request for website updates
+### Propose a pull request for website updates
 
 The final step of building a release candidate is to propose a website pull request.
 
@@ -338,7 +337,7 @@ _Checklist to proceed to the final step:_
 1. Community votes to release the proposed release
 2. While in incubation, Apache Incubator PMC votes to release the proposed release
 
-## Cancel a Release (Fix Issues)
+## Cancel a Release (Fix Issues)
 
 Any issue identified during the community review and vote should be fixed in this step.
 
@@ -367,23 +366,23 @@ Code changes should be proposed as standard pull requests and merged.
 Once all issues have been resolved, you should go back and build a new release candidate with 
 these changes.
 
-## Finalize the release
+## Finalize the release
 
 Once the release candidate has been reviewed and approved by the community, the release should be
  finalized. This involves the final deployment of the release to the release repositories, 
  merging the website changes, and announce the release.
  
-### Deploy artifacts to Maven Central repository
+### Deploy artifacts to Maven Central repository
 
 On Nexus, release the staged artifacts to Maven Central repository. In the `Staging Repositories`
  section, find the relevant release candidate `orgapachecarbondata-XXX` entry and click `Release`.
 
-### Deploy source release to dist.apache.org
+### Deploy source release to dist.apache.org
 
 Copy the source release from the `dev` repository to `release` repository at `dist.apache.org` 
 using Subversion.
 
-### Merge website pull request
+### Merge website pull request
 
 Merge the website pull request to list the release created earlier.
 
@@ -402,12 +401,12 @@ _Checklist to proceed to the next step:_
 3. Website pull request to list the release merged
 4. Release version finalized in Jira
 
-## Promote the release
+## Promote the release
 
 Once the release has been finalized, the last step of the process is to promote the release 
 within the project and beyond.
 
-### Apache mailing lists
+### Apache mailing lists
 
 Announce on the dev@ mailing list that the release has been finished.
  

http://git-wip-us.apache.org/repos/asf/carbondata/blob/445615fe/docs/troubleshooting.md
----------------------------------------------------------------------
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 7d66ee0..68dd538 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -32,7 +32,7 @@ java.io.FileNotFoundException: hdfs:/localhost:9000/carbon/store/default/hdfstab
 ```
 
   **Possible Cause**
-  If you use <hdfs path> as store path when creating carbonsession, may get the errors,because the default is LOCALLOCK.
+  If you use `<hdfs path>` as store path when creating carbonsession, may get the errors,because the default is LOCALLOCK.
 
   **Procedure**
   Before creating carbonsession, sets as below:
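
For readers who hit the `FileNotFoundException` above: the troubleshooting page that the last hunk patches continues (beyond the context shown in this diff) with the recommended lock setting. As a hedged sketch — the property name `carbon.lock.type` and the value `HDFSLOCK` are taken from CarbonData's configuration documentation, not from this diff — the fix looks like:

```
# carbon.properties — use HDFS-based locking instead of the default LOCALLOCK
# when the store path is on HDFS (assumption: this property must be set
# before the carbonsession is created)
carbon.lock.type=HDFSLOCK
```

With `LOCALLOCK`, lock files are created on the driver's local filesystem, which does not match an HDFS store path and can surface as the missing-file error shown in the hunk.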