Posted to commits@carbondata.apache.org by qi...@apache.org on 2020/04/26 12:21:09 UTC

[carbondata] branch master updated: [CARBONDATA-3775] Update materialized view document

This is an automated email from the ASF dual-hosted git repository.

qiangcai pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
     new 2b77065  [CARBONDATA-3775] Update materialized view document
2b77065 is described below

commit 2b77065170b66923539f28927f0981027ea740a2
Author: liuzhi <37...@qq.com>
AuthorDate: Sat Apr 18 16:14:12 2020 +0800

    [CARBONDATA-3775] Update materialized view document
    
    Why is this PR needed?
    Update materialized view document synchronously, after the materialized view is separated from the DataMap module.
    
    What changes were proposed in this PR?
    Update materialized view syntax comment.
    Add comment about usage of time series.
    Move document to document root directory from index directory.
    
    This closes #3720
---
 README.md               |   4 +-
 docs/index/mv-guide.md  | 271 --------------------------------------
 docs/language-manual.md |   2 +-
 docs/mv-guide.md        | 343 ++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 346 insertions(+), 274 deletions(-)

diff --git a/README.md b/README.md
index b1a712c..5cd27b5 100644
--- a/README.md
+++ b/README.md
@@ -57,8 +57,8 @@ CarbonData is built using Apache Maven, to [build CarbonData](https://github.com
  * [Data Types](https://github.com/apache/carbondata/blob/master/docs/supported-data-types-in-carbondata.md) 
 * [CarbonData Index Management](https://github.com/apache/carbondata/blob/master/docs/index/index-management.md) 
  * [CarbonData BloomFilter Index](https://github.com/apache/carbondata/blob/master/docs/index/bloomfilter-index-guide.md) 
- * [CarbonData Lucene Index](https://github.com/apache/carbondata/blob/master/docs/index/lucene-index-guide.md) 
- * [CarbonData MV DataMap](https://github.com/apache/carbondata/blob/master/docs/datamap/mv-datamap-guide.md)
+ * [CarbonData Lucene Index](https://github.com/apache/carbondata/blob/master/docs/index/lucene-index-guide.md)
+ * [CarbonData MV](https://github.com/apache/carbondata/blob/master/docs/mv-guide.md)
 * [Carbondata Secondary Index](https://github.com/apache/carbondata/blob/master/docs/index/secondary-index-guide.md)
 * [SDK Guide](https://github.com/apache/carbondata/blob/master/docs/sdk-guide.md) 
 * [C++ SDK Guide](https://github.com/apache/carbondata/blob/master/docs/csdk-guide.md)
diff --git a/docs/index/mv-guide.md b/docs/index/mv-guide.md
deleted file mode 100644
index b071967..0000000
--- a/docs/index/mv-guide.md
+++ /dev/null
@@ -1,271 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to you under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
--->
-
-# CarbonData MV DataMap
-
-* [Quick Example](#quick-example)
-* [MV DataMap](#mv-datamap-introduction)
-* [Loading Data](#loading-data)
-* [Querying Data](#querying-data)
-* [Compaction](#compacting-mv-datamap)
-* [Data Management](#data-management-with-mv-tables)
-* [MV TimeSeries Support](#mv-timeseries-support)
-* [MV TimeSeries RollUp Support](#mv-timeseries-rollup-support)
-
-## Quick example
-
-Start spark-sql in terminal and run the following queries,
-```
-CREATE TABLE maintable(a int, b string, c int) stored as carbondata;
-insert into maintable select 1, 'ab', 2;
-CREATE DATAMAP datamap_1 on table maintable as SELECT a, sum(b) from maintable group by a;
-SELECT a, sum(b) from maintable group by a;
-// NOTE: run explain query and check if query hits the datamap table from the plan
-EXPLAIN SELECT a, sum(b) from maintable group by a;
-```
-
-## MV DataMap Introduction
-  MV tables are created as DataMaps and managed as tables internally by CarbonData. User can create
-  limitless MV datamaps on a table to improve query performance provided the storage requirements
-  and loading time is acceptable.
-
-  MV datamap can be a lazy or a non-lazy datamap. Once MV datamaps are created, CarbonData's
-  CarbonAnalyzer helps to select the most efficient MV datamap based on the user query and rewrite
-  the SQL to select the data from MV datamap instead of main table. Since the data size of MV
-  datamap is smaller and data is pre-processed, user queries are much faster.
-
-  For instance, main table called **sales** which is defined as
-
-  ```
-  CREATE TABLE sales (
-    order_time timestamp,
-    user_id string,
-    sex string,
-    country string,
-    quantity int,
-    price bigint)
-  STORED AS carbondata
-  ```
-
-  User can create MV tables using the Create DataMap DDL
-
-  ```
-  CREATE DATAMAP agg_sales
-  ON TABLE sales
-  USING "MV"
-  DMPROPERTIES('TABLE_BLOCKSIZE'='256 MB','LOCAL_DICTIONARY_ENABLE'='false')
-  AS
-    SELECT country, sex, sum(quantity), avg(price)
-    FROM sales
-    GROUP BY country, sex
-  ```
- **NOTE**:
- * Group by and Order by columns has to be provided in projection list while creating mv datamap
- * If only single parent table is involved in mv datamap creation, then TableProperties of Parent table
-   (if not present in a aggregate function like sum(col)) listed below will be
-   inherited to datamap table
-    1. SORT_COLUMNS
-    2. SORT_SCOPE
-    3. TABLE_BLOCKSIZE
-    4. FLAT_FOLDER
-    5. LONG_STRING_COLUMNS
-    6. LOCAL_DICTIONARY_ENABLE
-    7. LOCAL_DICTIONARY_THRESHOLD
-    8. LOCAL_DICTIONARY_EXCLUDE
-    9. INVERTED_INDEX
-   10. NO_INVERTED_INDEX
-   11. COLUMN_COMPRESSOR
-
- * Creating MV datamap with select query containing only project of all columns of maintable is unsupported 
-      
-   **Example:**
-   If table 'x' contains columns 'a,b,c',
-   then creating MV datamap with below queries is not supported.
-   
-   1. ```select a,b,c from x```
-   2. ```select * from x```
- * TableProperties can be provided in DMProperties excluding LOCAL_DICTIONARY_INCLUDE,
-   LOCAL_DICTIONARY_EXCLUDE, INVERTED_INDEX,
-   NO_INVERTED_INDEX, SORT_COLUMNS, LONG_STRING_COLUMNS, RANGE_COLUMN & COLUMN_META_CACHE
- * TableProperty given in DMProperties will be considered for mv creation, eventhough if same
-   property is inherited from parent table, which allows user to provide different tableproperties
-   for child table
- * MV creation with limit or union all ctas queries is unsupported
- * MV does not support Streaming
-
-#### How MV tables are selected
-
-When a user query is submitted, during query planning phase, CarbonData will collect modular plan
-candidates and process the the ModularPlan based on registered summary data sets. Then,
-mv datamap table for this query will be selected among the candidates.
-
-For the main table **sales** and mv table  **agg_sales** created above, following queries
-```
-SELECT country, sex, sum(quantity), avg(price) from sales GROUP BY country, sex
-
-SELECT sex, sum(quantity) from sales GROUP BY sex
-
-SELECT avg(price), country from sales GROUP BY country
-```
-
-will be transformed by CarbonData's query planner to query against mv table
-**agg_sales** instead of the main table **sales**
-
-However, for following queries
-```
-SELECT user_id, country, sex, sum(quantity), avg(price) from sales GROUP BY user_id, country, sex
-
-SELECT sex, avg(quantity) from sales GROUP BY sex
-
-SELECT country, max(price) from sales GROUP BY country
-```
-
-will query against main table **sales** only, because it does not satisfy mv table
-selection logic.
-
-## Loading data
-
-### Loading data to Non-Lazy MV Datamap
-
-In case of WITHOUT DEFERRED REBUILD, for existing table with loaded data, data load to MV table will
-be triggered by the CREATE DATAMAP statement when user creates the MV table.
-For incremental loads to main table, data to datamap will be loaded once the corresponding main
-table load is completed.
-
-### Loading data to Lazy MV Datamap
-
-In case of WITH DEFERRED REBUILD, data load to MV table will be triggered by the [Manual Refresh](./datamap-management.md#manual-refresh)
-command. MV datamap will be in DISABLED state in below scenarios,
-  * when mv datamap is created
-  * when data of main table and datamap are not in sync
-
-User should fire REBUILD DATAMAP command to sync all segments of main table with datamap table and
-which ENABLES the datamap for query
-
-### Loading data to Multiple MV's
-During load to main table, if anyone of the load to datamap table fails, then that corresponding
-datamap will be DISABLED and load to other datamaps mapped to main table will continue. User can
-fire REBUILD DATAMAP command to sync or else the subsequent table load will load the old failed
-loads along with current load and enable the disabled datamap.
-
- **NOTE**:
- * In case of InsertOverwrite/Update operation on parent table, all segments of datamap table will
-   be MARKED_FOR_DELETE and reload to datamap table will happen by REBUILD DATAMAP, in case of Lazy
-   mv datamap/ once InsertOverwrite/Update operation on parent table is finished, in case of
-   Non-Lazy mv.
- * In case of full scan query, Data Size and Index Size of main table and child table will not the
-   same, as main table and child table has different column names.
-
-## Querying data
-As a technique for query acceleration, MV tables cannot be queried directly.
-Queries are to be made on main table. While doing query planning, internally CarbonData will check
-associated mv datamap tables with the main table, and do query plan transformation accordingly.
-
-User can verify whether a query can leverage mv datamap table or not by executing `EXPLAIN`
-command, which will show the transformed logical plan, and thus user can check whether mv datamap
-table is selected.
-
-
-## Compacting MV datamap
-
-### Compacting MV datamap table through Main Table compaction
-Running Compaction command (`ALTER TABLE COMPACT`)[COMPACTION TYPE-> MINOR/MAJOR] on main table will
-automatically compact the mv datamap tables created on the main table, once compaction on main table
-is done.
-
-### Compacting MV datamap table through DDL command
-Compaction on mv datamap can be triggered by running the following DDL command(supported only for mv).
-  ```
-  ALTER DATAMAP datamap_name COMPACT 'COMPACTION_TYPE'
-  ```
-
-## Data Management with mv tables
-In current implementation, data consistency needs to be maintained for both main table and mv datamap
-tables. Once there is mv datamap table created on the main table, following command on the main
-table is not supported:
-1. Data management command: `DELETE SEGMENT`.
-2. Schema management command: `ALTER TABLE DROP COLUMN`, `ALTER TABLE CHANGE DATATYPE`,
-   `ALTER TABLE RENAME`, `ALTER COLUMN RENAME`. Note that adding a new column is supported, and for
-   dropping columns and change datatype command, CarbonData will check whether it will impact the
-   mv datamap table, if not, the operation is allowed, otherwise operation will be rejected by
-   throwing exception.
-3. Partition management command: `ALTER TABLE ADD/DROP PARTITION`. Note that dropping a partition
-   will be allowed only if partition is participating in all datamaps associated with main table.
-   Drop Partition is not allowed, if any mv datamap is associated with more than one parent table.
-   Drop Partition directly on datamap table is not allowed.
-4. Complex Datatype's for mv datamap is not supported.
-
-However, there is still way to support these operations on main table, in current CarbonData
-release, user can do as following:
-1. Remove the mv datamap table by `DROP DATAMAP` command
-2. Carry out the data management operation on main table
-3. Create the mv datamap table again by `CREATE DATAMAP` command
-Basically, user can manually trigger the operation by re-building the datamap.
-
-## MV TimeSeries Support
-MV non-lazy datamap supports TimeSeries queries. Supported granularity strings are: year, month, week, day,
-hour,thirty_minute, fifteen_minute, ten_minute, five_minute, minute and second.
-
- User can create MV datamap with timeseries queries like the below example:
-
-  ```
-  CREATE DATAMAP agg_sales
-  ON TABLE sales
-  USING "MV"
-  AS
-    SELECT timeseries(order_time,'second'),avg(price)
-    FROM sales
-    GROUP BY timeseries(order_time,'second')
-  ```
-Supported columns that can be provided in timeseries udf should be of TimeStamp/Date type.
-Timeseries queries with Date type support's only year, month, day and week granularities.
-
- **NOTE**:
- 1. Single select statement cannot contain timeseries udf(s) neither with different granularity nor
- with different timestamp/date columns.
- 
- ## MV TimeSeries RollUp Support
-  MV Timeseries queries can be rolledUp from existing mv datamaps.
-  ### Query RollUp
- Consider an example where the query is on hour level granularity, but the datamap
- of hour is not present but  minute level datamap is present, then we can get the data
- from minute level and the aggregate the hour level data and give output.
- This is called query rollup.
- 
- Consider if user create's below timeseries datamap,
-   ```
-   CREATE DATAMAP agg_sales
-   ON TABLE sales
-   USING "MV"
-   AS
-     SELECT timeseries(order_time,'minute'),avg(price)
-     FROM sales
-     GROUP BY timeseries(order_time,'minute')
-   ```
- and fires the below query with hour level granularity.
-   ```
-    SELECT timeseries(order_time,'hour'),avg(price)
-    FROM sales
-    GROUP BY timeseries(order_time,'hour')
-   ```
- Then, the above query can be rolled up from 'agg_sales' mv datamap, by adding hour
- level timeseries aggregation on minute level datamap. Users can fire explain command
- to check if query is rolled up from existing mv datamaps.
- 
-  **NOTE**:
-  1. Queries cannot be rolled up, if filter contains timeseries function.
-  2. RollUp is not yet supported for queries having join clause or order by functions.
\ No newline at end of file
diff --git a/docs/language-manual.md b/docs/language-manual.md
index 9a4a79b..f533e2c 100644
--- a/docs/language-manual.md
+++ b/docs/language-manual.md
@@ -28,7 +28,7 @@ CarbonData has its own parser, in addition to Spark's SQL Parser, to parse and p
     - [Bloom](./index/bloomfilter-index-guide.md)
     - [Lucene](./index/lucene-index-guide.md)
     - [Secondary-index](./index/secondary-index-guide.md)
-  - [Materialized Views (MV)](./index/mv-guide.md)
+  - [Materialized Views](./mv-guide.md)
   - [Streaming](./streaming-guide.md)
 - Data Manipulation Statements
   - [DML:](./dml-of-carbondata.md) [Load](./dml-of-carbondata.md#load-data), [Insert](./dml-of-carbondata.md#insert-data-into-carbondata-table), [Update](./dml-of-carbondata.md#update), [Delete](./dml-of-carbondata.md#delete)
diff --git a/docs/mv-guide.md b/docs/mv-guide.md
new file mode 100644
index 0000000..9902e1c
--- /dev/null
+++ b/docs/mv-guide.md
@@ -0,0 +1,343 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to you under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+
+# CarbonData Materialized View
+
+* [Quick Example](#quick-example)
+* [Introduction](#introduction)
+* [Loading Data](#loading-data)
+* [Querying Data](#querying-data)
+* [Compaction](#compacting)
+* [Data Management](#data-management)
+* [Time Series Support](#time-series-support)
+* [Time Series RollUp Support](#time-series-rollup-support)
+
+## Quick example
+
+ Start spark-sql in a terminal and run the following queries:
+
+   ```
+     CREATE TABLE maintable(a int, b string, c int) stored as carbondata;
+     INSERT INTO maintable SELECT 1, 'ab', 2;
+     CREATE MATERIALIZED VIEW view1 AS SELECT a, sum(c) FROM maintable GROUP BY a;
+     SELECT a, sum(c) FROM maintable GROUP BY a;
+     -- NOTE: run the explain query and check whether the query hits the materialized view in the plan
+     EXPLAIN SELECT a, sum(c) FROM maintable GROUP BY a;
+   ```
+
+## Introduction
+
+ Materialized views are created as tables from queries. Users can create any number of materialized 
+ views to improve query performance, provided the storage requirements and loading time are 
+ acceptable.
+ 
+ A materialized view can be refreshed on commit or on manual. Once materialized views are created, 
+ CarbonData's MVRewriteRule helps to select the most efficient materialized view based on 
+ the user query and rewrites the SQL to select the data from the materialized view instead of the 
+ fact tables. Since the data size of a materialized view is smaller and its data is pre-processed, 
+ user queries are much faster.
+ 
+ For instance, consider a fact table called **sales**, defined as:
+ 
+   ```
+     CREATE TABLE sales (
+       order_time timestamp,
+       user_id string,
+       sex string,
+       country string,
+       quantity int,
+       price bigint)
+     STORED AS carbondata
+   ```
+
+ Users can create a materialized view using the CREATE MATERIALIZED VIEW statement:
+ 
+   ```
+     CREATE MATERIALIZED VIEW agg_sales
+     PROPERTIES('TABLE_BLOCKSIZE'='256 MB','LOCAL_DICTIONARY_ENABLE'='false')
+     AS
+       SELECT country, sex, sum(quantity), avg(price)
+       FROM sales
+       GROUP BY country, sex
+   ```
+
+ **NOTE**:
+   * Group by and Order by columns have to be provided in the projection list while creating a materialized view.
+   * If only a single fact table is involved in the materialized view creation, then the 
+     TableProperties of the fact table listed below will be inherited by the materialized view 
+     (provided the corresponding columns are not wrapped in an aggregate function like sum(col)).
+       1. SORT_COLUMNS
+       2. SORT_SCOPE
+       3. TABLE_BLOCKSIZE
+       4. FLAT_FOLDER
+       5. LONG_STRING_COLUMNS
+       6. LOCAL_DICTIONARY_ENABLE
+       7. LOCAL_DICTIONARY_THRESHOLD
+       8. LOCAL_DICTIONARY_EXCLUDE
+       9. INVERTED_INDEX
+       10. NO_INVERTED_INDEX
+       11. COLUMN_COMPRESSOR
+   * Creating a materialized view with a select query that only projects all columns of the fact 
+     table is unsupported.
+     **Example:**
+       If table 'x' contains columns 'a,b,c', then creating a materialized view with the below queries is not supported.
+         1. ```SELECT a,b,c FROM x```
+         2. ```SELECT * FROM x```
+   * TableProperties can be provided in Properties excluding LOCAL_DICTIONARY_INCLUDE,
+     LOCAL_DICTIONARY_EXCLUDE, INVERTED_INDEX, NO_INVERTED_INDEX, SORT_COLUMNS, LONG_STRING_COLUMNS, 
+     RANGE_COLUMN & COLUMN_META_CACHE.
+   * A TableProperty given in Properties will be considered for materialized view creation even if 
+     the same property is inherited from the fact table, which allows users to provide different 
+     table properties for the materialized view (see the sketch after this list).
+   * Materialized view creation with limit or union all CTAS queries is unsupported.
+   * Materialized view does not support streaming.
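+
+ For example, a sketch of overriding an inherited property with a different value (the view name 
+ and the property value are illustrative only):
+
+   ```
+     -- The SORT_SCOPE given here is used for the view even if sales defines its own SORT_SCOPE
+     CREATE MATERIALIZED VIEW agg_sales_nosort
+     PROPERTIES('SORT_SCOPE'='NO_SORT')
+     AS
+       SELECT country, sum(price) FROM sales GROUP BY country
+   ```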
+
+#### How materialized views are selected
+
+ When a user query is submitted, during the query planning phase, CarbonData will collect modular plan
+ candidates and process the ModularPlan based on the registered summary data sets. Then, the
+ materialized view for this query will be selected among the candidates.
+
+ For the fact table **sales** and the materialized view **agg_sales** created above, the following queries
+   ```
+     SELECT country, sex, sum(quantity), avg(price) FROM sales GROUP BY country, sex
+     SELECT sex, sum(quantity) FROM sales GROUP BY sex
+     SELECT avg(price), country FROM sales GROUP BY country
+   ```
+
+ will be transformed by CarbonData's query planner to query against the materialized view **agg_sales** 
+ instead of the fact table **sales**.
+ 
+ However, the following queries
+
+   ```
+     SELECT user_id, country, sex, sum(quantity), avg(price) FROM sales GROUP BY user_id, country, sex
+     SELECT sex, avg(quantity) FROM sales GROUP BY sex
+     SELECT country, max(price) FROM sales GROUP BY country
+   ```
+
+ will query against the fact table **sales** only, because they do not satisfy the materialized view
+ selection logic.
+
+## Loading data
+
+### Loading data on commit
+
+ In case of WITHOUT DEFERRED REFRESH, for an existing table with loaded data, the data load to the 
+ materialized view will be triggered by the CREATE MATERIALIZED VIEW statement itself when the user 
+ creates the materialized view.
+
+ For incremental loads to the fact table, data will be loaded to the materialized view once the 
+ corresponding fact table load is completed.
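+
+ As a sketch of this flow, assuming the **sales** table defined above (the view name and inserted 
+ values are illustrative only):
+
+   ```
+     -- No DEFERRED REFRESH clause: existing sales data is loaded into the view immediately
+     CREATE MATERIALIZED VIEW agg_sales_commit AS
+       SELECT country, sum(quantity) FROM sales GROUP BY country;
+     -- A later load to the fact table is propagated to the view automatically
+     INSERT INTO sales SELECT '2016-02-23 09:10:00', 'c006', 'male', 'yyy', 50, 3;
+   ```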
+
+### Loading data on manual
+
+ In case of WITH DEFERRED REFRESH, the data load to the materialized view will be triggered by the 
+ refresh command. The materialized view will be in DISABLED state in the below scenarios:
+
+   * when the materialized view is created.
+   * when the data of the fact table and the materialized view are not in sync.
+  
+ Users should fire the REFRESH MATERIALIZED VIEW command to sync all segments of the fact table with 
+ the materialized view, which ENABLES the materialized view for query.
+
+ Command example:
+   ```
+     REFRESH MATERIALIZED VIEW agg_sales
+   ```
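+
+ A sketch of the full cycle, using the WITH DEFERRED REFRESH clause referred to above (the view 
+ name is illustrative only):
+
+   ```
+     -- Created in DISABLED state; no data is loaded until the refresh below
+     CREATE MATERIALIZED VIEW agg_sales_deferred
+     WITH DEFERRED REFRESH
+     AS
+       SELECT country, sum(quantity) FROM sales GROUP BY country;
+     -- Syncs the view with all segments of the fact table and ENABLES it for query
+     REFRESH MATERIALIZED VIEW agg_sales_deferred;
+   ```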
+
+### Loading data to multiple materialized views
+
+ During a load to the fact table, if any one of the loads to a materialized view fails, then that 
+ materialized view will be DISABLED, and the loads to the other materialized views mapped 
+ to the fact table will continue. 
+
+ Users can fire the REFRESH MATERIALIZED VIEW command to sync; otherwise, the subsequent table load 
+ will load the old failed loads along with the current load and enable the disabled materialized view.
+
+ **NOTE**:
+   * In case of an InsertOverwrite/Update operation on the fact table, all segments of the 
+     materialized view will be MARKED_FOR_DELETE. The materialized view is then reloaded by the 
+     REFRESH MATERIALIZED VIEW command for views refreshed on manual, or automatically once the 
+     InsertOverwrite/Update operation on the fact table is finished for views refreshed on commit.
+   * In case of a full scan query, the Data Size and Index Size of the fact table and the 
+     materialized view will not be the same, as the fact table and the materialized view have 
+     different column names.
+
+## Querying data
+
+ Queries are to be made on the fact table. While doing query planning, CarbonData will internally 
+ check for the materialized views associated with the fact table, and do the query plan 
+ transformation accordingly.
+ 
+ Users can verify whether a query leverages a materialized view by executing the `EXPLAIN` command, 
+ which will show the transformed logical plan, and thus check whether a materialized view 
+ is selected.
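+
+ For instance, a minimal check against the **agg_sales** view created earlier:
+
+   ```
+     -- If the rewrite applies, the shown plan should query agg_sales instead of sales
+     EXPLAIN SELECT sex, sum(quantity) FROM sales GROUP BY sex;
+   ```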
+
+## Compacting
+
+ Running the compaction command (`ALTER TABLE COMPACT`, with compaction type MINOR or MAJOR) on the 
+ fact table will automatically compact the materialized views created on the fact table, once the 
+ compaction on the fact table is done.
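+
+ Command example (using the **sales** table above):
+
+   ```
+     ALTER TABLE sales COMPACT 'MINOR'
+   ```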
+
+## Data Management
+
+ In the current implementation, data consistency needs to be maintained for both the fact table and 
+ the materialized views. 
+ 
+ Once there is a materialized view created on the fact table, the following commands on the fact
+ table are not supported:
+ 
+   1. Data management command: `DELETE SEGMENT`.
+   2. Schema management command: `ALTER TABLE DROP COLUMN`, `ALTER TABLE CHANGE DATATYPE`,
+      `ALTER TABLE RENAME`, `ALTER COLUMN RENAME`. Note that adding a new column is supported, and for
+      the drop column and change datatype commands, CarbonData will check whether the operation impacts
+      the materialized view; if not, the operation is allowed, otherwise the operation will be rejected
+      with an exception.
+   3. Partition management command: `ALTER TABLE ADD/DROP PARTITION`. Note that dropping a partition
+      will be allowed only if the partition participates in all materialized views associated with the 
+      fact table. Drop Partition is not allowed if any materialized view is associated with more than 
+      one fact table. Drop Partition directly on a materialized view is not allowed.
+   4. Complex data types for materialized views are not supported.
+   
+ However, there is still a way to perform these operations on the fact table. In the current 
+ CarbonData release, the user can do the following:
+ 
+   1. Remove the materialized view by the `DROP MATERIALIZED VIEW` command.
+   2. Carry out the data management operation on fact table.
+   3. Create the materialized view again by `CREATE MATERIALIZED VIEW` command.
+   
+ Basically, the user can manually work around the restriction by re-building the materialized view, 
+ as sketched below.
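+
+ A minimal sketch of this cycle (the dropped column is illustrative only):
+
+   ```
+     DROP MATERIALIZED VIEW agg_sales;
+     -- Example data management operation that would otherwise be rejected
+     ALTER TABLE sales DROP COLUMNS(user_id);
+     CREATE MATERIALIZED VIEW agg_sales AS
+       SELECT country, sex, sum(quantity), avg(price) FROM sales GROUP BY country, sex;
+   ```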
+
+## Time Series Support
+
+ Time series data are simply measurements or events that are tracked, monitored, down-sampled, and 
+ aggregated over time. Materialized views with automatic refresh mode support time series queries.
+
+ CarbonData provides a built-in time series udf with the below definition:
+
+   ```
+     timeseries(event_time_column, 'granularity')
+   ```
+
+ The event time column provided in the time series udf should be of Timestamp/Date type.
+
+ The below table describes the time hierarchy and the granularity levels that can be provided in a 
+ time series udf, which enable automatic roll-up in the time dimension for queries.
+
+ | Granularity    | Description                                           |
+ |----------------|-------------------------------------------------------|
+ | year           | Data will be aggregated over year                     |
+ | month          | Data will be aggregated over month                    | 
+ | week           | Data will be aggregated over week                     |
+ | day            | Data will be aggregated over day                      |
+ | hour           | Data will be aggregated over hour                     |
+ | thirty_minute  | Data will be aggregated over every thirty minutes     |
+ | fifteen_minute | Data will be aggregated over every fifteen minutes    |
+ | ten_minute     | Data will be aggregated over every ten minutes        |
+ | five_minute    | Data will be aggregated over every five minutes       |
+ | minute         | Data will be aggregated over every one minute         |
+ | second         | Data will be aggregated over every second             |
+
+ A time series udf whose column is of Date type supports only the year, month, day and week 
+ granularities.
+
+ Below is the sample data loaded to the fact table **sales**.
+  
+   ```
+     order_time,          user_id, sex,    country, quantity, price
+     2016-02-23 09:01:30, c001,    male,   xxx,     100,      2
+     2016-02-23 09:01:50, c002,    male,   yyy,     200,      5
+     2016-02-23 09:03:30, c003,    female, xxx,     400,      1
+     2016-02-23 09:03:50, c004,    male,   yyy,     300,      5
+     2016-02-23 09:07:50, c005,    female, xxx,     500,      5
+   ```
+
+ Users can create materialized views with time series queries like the below example:
+
+   ```
+     CREATE MATERIALIZED VIEW agg_sales AS
+     SELECT timeseries(order_time, 'minute'),avg(price)
+     FROM sales
+     GROUP BY timeseries(order_time, 'minute')
+   ```
+ In this example, a materialized view is created that aggregates the price column over every 
+ one-minute interval. Execute the below query to check the time series data.
+  
+   ```
+     SELECT timeseries(order_time,'minute'), avg(price)
+     FROM sales
+     GROUP BY timeseries(order_time,'minute')
+   ```
+ Find below the result of the above query, aggregated per minute.
+ 
+   ```
+     +---------------------------------------+----------------+
+     |UDF:timeseries(order_time, minute)     |avg(price)      |
+     +---------------------------------------+----------------+
+     |2016-02-23 09:01:00                    |3.5             |
+     |2016-02-23 09:07:00                    |5.0             |
+     |2016-02-23 09:03:00                    |3.0             |
+     +---------------------------------------+----------------+
+   ```
+
+ The data loading, querying, and compaction commands and their behavior are the same as for other 
+ materialized views.
+
+#### How is data aggregated over time?
+
+ On each load to the materialized view, data will be aggregated based on the granularity interval 
+ specified during creation and stored in each segment.
+ 
+ **NOTE**:
+   1. A single select statement cannot contain time series udfs with different granularities or 
+      with different timestamp/date columns (see the sketch below).
+   2. Retention policies for time series are not supported yet.
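+
+ For instance, creating a materialized view whose select statement mixes two granularities, as in 
+ the below sketch, violates the first restriction:
+
+   ```
+     -- NOT supported: two different granularities in a single select statement
+     SELECT timeseries(order_time,'minute'), timeseries(order_time,'hour'), avg(price)
+     FROM sales
+     GROUP BY timeseries(order_time,'minute'), timeseries(order_time,'hour')
+   ```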
+ 
+## Time Series RollUp Support
+
+ Time series queries can be rolled up from existing materialized views.
+ 
+### Query RollUp
+
+ Consider an example where the query is at hour-level granularity, a materialized view
+ with hour-level granularity is not present, but a materialized view with minute-level granularity is 
+ present. Then the data can be fetched at the minute level and aggregated to the hour level to 
+ produce the output. This is called query rollup.
+ 
+ Suppose the user creates the below time series materialized view,
+ 
+   ```
+     CREATE MATERIALIZED VIEW agg_sales
+     AS
+     SELECT timeseries(order_time,'minute'),avg(price)
+     FROM sales
+     GROUP BY timeseries(order_time,'minute')
+   ```
+
+ and fires the below query at hour-level granularity.
+ 
+   ```
+     SELECT timeseries(order_time,'hour'),avg(price)
+     FROM sales
+     GROUP BY timeseries(order_time,'hour')
+   ```
+
+ Then, the above query can be rolled up from the materialized view 'agg_sales', by adding an 
+ hour-level time series aggregation on top of the minute-level aggregation. Users can fire the 
+ EXPLAIN command to check whether the query is rolled up from an existing materialized view.
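+
+ For example, as a sketch:
+
+   ```
+     -- If rollup succeeds, the transformed plan should reference 'agg_sales'
+     EXPLAIN SELECT timeseries(order_time,'hour'), avg(price)
+     FROM sales
+     GROUP BY timeseries(order_time,'hour')
+   ```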
+ 
+  **NOTE**:
+    1. Queries cannot be rolled up if the filter contains a time series function.
+    2. Roll up is not yet supported for queries having a join clause or order by functions.
+  
\ No newline at end of file