Posted to commits@hawq.apache.org by yo...@apache.org on 2016/08/29 16:46:42 UTC

[07/36] incubator-hawq-docs git commit: moving book configuration to new 'book' branch, for HAWQ-1027

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/hawq-reference.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/hawq-reference.html.md.erb b/reference/hawq-reference.html.md.erb
new file mode 100644
index 0000000..f5abd2a
--- /dev/null
+++ b/reference/hawq-reference.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: HAWQ Reference
+---
+
+This section provides a complete reference to HAWQ SQL commands, management utilities, configuration parameters, environment variables, and database objects.
+
+-   **[Server Configuration Parameter Reference](../reference/HAWQSiteConfig.html)**
+
+    This section describes all server configuration parameters (GUCs) available in HAWQ.
+
+-   **[HDFS Configuration Reference](../reference/HDFSConfigurationParameterReference.html)**
+
+    This reference page describes HDFS configuration values that are configured for HAWQ within `hdfs-site.xml`, `core-site.xml`, or `hdfs-client.xml`.
+
+-   **[Environment Variables](../reference/HAWQEnvironmentVariables.html)**
+
+    This topic contains a reference of the environment variables that you set for HAWQ.
+
+-   **[Character Set Support Reference](../reference/CharacterSetSupportReference.html)**
+
+    This topic provides a reference of the character sets supported in HAWQ.
+
+-   **[Data Types](../reference/HAWQDataTypes.html)**
+
+    This topic provides a reference of the data types supported in HAWQ.
+
+-   **[SQL Commands](../reference/SQLCommandReference.html)**
+
+    This section contains a description and the syntax of the SQL commands supported by HAWQ.
+
+-   **[System Catalog Reference](../reference/catalog/catalog_ref.html)**
+
+    This reference describes the HAWQ system catalog tables and views.
+
+-   **[The hawq\_toolkit Administrative Schema](../reference/toolkit/hawq_toolkit.html)**
+
+    This section provides a reference on the `hawq_toolkit` administrative schema.
+
+-   **[HAWQ Management Tools Reference](../reference/cli/management_tools.html)**
+
+    Reference information for command-line utilities available in HAWQ.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ABORT.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ABORT.html.md.erb b/reference/sql/ABORT.html.md.erb
new file mode 100644
index 0000000..ab053d8
--- /dev/null
+++ b/reference/sql/ABORT.html.md.erb
@@ -0,0 +1,37 @@
+---
+title: ABORT
+---
+
+Aborts the current transaction.
+
+## <a id="synop"></a>Synopsis
+
+```pre
+ABORT [ WORK | TRANSACTION ]
+```
+
+## <a id="abort__section3"></a>Description
+
+`ABORT` rolls back the current transaction and causes all the updates made by the transaction to be discarded. This command is identical in behavior to the standard SQL command `ROLLBACK`, and is present only for historical reasons.
+
+## <a id="abort__section4"></a>Parameters
+
+<dt>WORK  
+TRANSACTION  </dt>
+<dd>Optional key words. They have no effect.</dd>
+
+## <a id="abort__section5"></a>Notes
+
+Use `COMMIT` to successfully terminate a transaction.
+
+Issuing `ABORT` when not inside a transaction does no harm, but it will provoke a warning message.
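+
+A minimal sketch of `ABORT` inside an explicit transaction (the `orders` table and the inserted row are hypothetical):
+
+```pre
+BEGIN;
+INSERT INTO orders VALUES (42, 'pending');
+-- Discard the insert instead of committing it.
+ABORT;
+```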
+
+## <a id="compat"></a>Compatibility
+
+This command is a HAWQ extension present for historical reasons. `ROLLBACK` is the equivalent standard SQL command.
+
+## <a id="see"></a>See Also
+
+[BEGIN](BEGIN.html), [COMMIT](COMMIT.html), [ROLLBACK](ROLLBACK.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ALTER-AGGREGATE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-AGGREGATE.html.md.erb b/reference/sql/ALTER-AGGREGATE.html.md.erb
new file mode 100644
index 0000000..b1131ef
--- /dev/null
+++ b/reference/sql/ALTER-AGGREGATE.html.md.erb
@@ -0,0 +1,68 @@
+---
+title: ALTER AGGREGATE
+---
+
+Changes the definition of an aggregate function.
+
+## <a id="synop"></a>Synopsis
+
+```pre
+ALTER AGGREGATE <name> ( <type> [ , ... ] ) RENAME TO <new_name>
+
+ALTER AGGREGATE <name> ( <type> [ , ... ] ) OWNER TO <new_owner>
+
+ALTER AGGREGATE <name> ( <type> [ , ... ] ) SET SCHEMA <new_schema>
+```
+
+## <a id="desc"></a>Description
+
+`ALTER AGGREGATE` changes the definition of an aggregate function.
+
+You must own the aggregate function to use `ALTER AGGREGATE`. To change the schema of an aggregate function, you must also have `CREATE` privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have `CREATE` privilege on the aggregate function's schema. (These restrictions enforce that altering the owner does not do anything you could not do by dropping and recreating the aggregate function. However, a superuser can alter ownership of any aggregate function anyway.)
+
+## <a id="alteraggregate__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing aggregate function.</dd>
+
+<dt> \<type\>   </dt>
+<dd>An input data type on which the aggregate function operates. To reference a zero-argument aggregate function, write \* in place of the list of input data types.</dd>
+
+<dt> \<new\_name\>   </dt>
+<dd>The new name of the aggregate function.</dd>
+
+<dt> \<new\_owner\>   </dt>
+<dd>The new owner of the aggregate function.</dd>
+
+<dt> \<new\_schema\>   </dt>
+<dd>The new schema for the aggregate function.</dd>
+
+## <a id="alteraggregate__section5"></a>Examples
+
+To rename the aggregate function `myavg` for type `integer` to `my_average`:
+
+```pre
+ALTER AGGREGATE myavg(integer) RENAME TO my_average;
+```
+
+To change the owner of the aggregate function `myavg` for type `integer` to `joe`:
+
+```pre
+ALTER AGGREGATE myavg(integer) OWNER TO joe;
+```
+
+To move the aggregate function `myavg` for type `integer` into schema `myschema`:
+
+```pre
+ALTER AGGREGATE myavg(integer) SET SCHEMA myschema;
+```
+
+## <a id="compat"></a>Compatibility
+
+There is no `ALTER AGGREGATE` statement in the SQL standard.
+
+## <a id="see"></a>See Also
+
+[CREATE AGGREGATE](CREATE-AGGREGATE.html), [DROP AGGREGATE](DROP-AGGREGATE.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ALTER-DATABASE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-DATABASE.html.md.erb b/reference/sql/ALTER-DATABASE.html.md.erb
new file mode 100644
index 0000000..782daf5
--- /dev/null
+++ b/reference/sql/ALTER-DATABASE.html.md.erb
@@ -0,0 +1,52 @@
+---
+title: ALTER DATABASE
+---
+
+Changes the attributes of a database.
+
+## <a id="alterrole__section2"></a>Synopsis
+
+```pre
+ALTER DATABASE <name> SET <parameter> { TO | = } { <value> | DEFAULT } 
+
+ALTER DATABASE <name> RESET <parameter>
+```
+
+## <a id="desc"></a>Description
+
+`ALTER DATABASE` changes the attributes of a HAWQ database.
+
+The `SET` and `RESET` \<parameter\> forms change the session default for a configuration parameter for a HAWQ database. Whenever a new session is subsequently started in that database, the specified value becomes the session default value. The database-specific default overrides whatever setting is present in the server configuration file (`hawq-site.xml`). Only the database owner or a superuser can change the session defaults for a database. Certain parameters cannot be set this way, or can only be set by a superuser.
+
+## <a id="alterrole__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name of the database whose attributes are to be altered.
+
+**Note:** HAWQ reserves the database "hcatalog" for system use. You cannot connect to or alter the system "hcatalog" database.</dd>
+
+<dt> \<parameter\>   </dt>
+<dd>Set this database's session default for the specified configuration parameter to the given value. If value is `DEFAULT` or if `RESET` is used, the database-specific setting is removed, so the system-wide default setting will be inherited in new sessions. Use `RESET ALL` to clear all database-specific settings. See [About Server Configuration Parameters](../guc/guc_config.html#topic1) for information about user-settable configuration parameters.</dd>
+
+## <a id="notes"></a>Notes
+
+It is also possible to set a configuration parameter session default for a specific role (user) rather than to a database. Role-specific settings override database-specific ones if there is a conflict. See [ALTER ROLE](ALTER-ROLE.html).
+
+## <a id="examples"></a>Examples
+
+To set the default schema search path for the `mydatabase` database:
+
+```pre
+ALTER DATABASE mydatabase SET search_path TO myschema, 
+public, pg_catalog;
+```
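+
+To remove the database-specific setting again and fall back to the system-wide default (a minimal sketch using the same database):
+
+```pre
+ALTER DATABASE mydatabase RESET search_path;
+```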
+
+## <a id="compat"></a>Compatibility
+
+The `ALTER DATABASE` statement is a HAWQ extension.
+
+## <a id="see"></a>See Also
+
+[CREATE DATABASE](CREATE-DATABASE.html#topic1), [DROP DATABASE](DROP-DATABASE.html#topic1), [SET](SET.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ALTER-FUNCTION.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-FUNCTION.html.md.erb b/reference/sql/ALTER-FUNCTION.html.md.erb
new file mode 100644
index 0000000..f21a808
--- /dev/null
+++ b/reference/sql/ALTER-FUNCTION.html.md.erb
@@ -0,0 +1,108 @@
+---
+title: ALTER FUNCTION
+---
+
+Changes the definition of a function.
+
+## <a id="alterfunction__section2"></a>Synopsis
+
+``` sql
+ALTER FUNCTION <name> ( [ [<argmode>] [<argname>] <argtype> [, ...] ] )
+   <action> [, ... ] [RESTRICT]
+
+ALTER FUNCTION <name> ( [ [<argmode>] [<argname>] <argtype> [, ...] ] )
+   RENAME TO <new_name>
+
+ALTER FUNCTION <name> ( [ [<argmode>] [<argname>] <argtype> [, ...] ] )
+   OWNER TO <new_owner>
+
+ALTER FUNCTION <name> ( [ [<argmode>] [<argname>] <argtype> [, ...] ] )
+   SET SCHEMA <new_schema>
+
+```
+
+where \<action\> is one of:
+
+```pre
+{ CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT }
+{ IMMUTABLE | STABLE | VOLATILE }
+{ [EXTERNAL] SECURITY INVOKER | [EXTERNAL] SECURITY DEFINER }
+```
+
+## <a id="desc"></a>Description
+
+`ALTER FUNCTION` changes the definition of a function.
+
+You must own the function to use `ALTER FUNCTION`. To change a function's schema, you must also have `CREATE` privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have `CREATE` privilege on the function's schema. (These restrictions enforce that altering the owner does not do anything you could not do by dropping and recreating the function. However, a superuser can alter ownership of any function anyway.)
+
+## <a id="alterfunction__section4"></a>Parameters
+
+<dt> \<name\>  </dt>
+<dd>The name (optionally schema-qualified) of an existing function.</dd>
+
+<dt>\<argmode\>  </dt>
+<dd>The mode of an argument: either `IN`, `OUT`, or `INOUT`. If omitted, the default is `IN`. Note that `ALTER FUNCTION` does not actually pay any attention to `OUT` arguments, since only the input arguments are needed to determine the function's identity. So it is sufficient to list the `IN` and `INOUT` arguments.</dd>
+
+<dt> \<argname\>  </dt>
+<dd>The name of an argument. Note that `ALTER FUNCTION` does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity.</dd>
+
+<dt> \<argtype\>  </dt>
+<dd>The data type(s) of the function's arguments (optionally schema-qualified), if any.</dd>
+
+<dt> \<new\_name\>  </dt>
+<dd>The new name of the function.</dd>
+
+<dt> \<new\_owner\>  </dt>
+<dd>The new owner of the function. Note that if the function is marked `SECURITY DEFINER`, it will subsequently execute as the new owner.</dd>
+
+<dt> \<new\_schema\>  </dt>
+<dd>The new schema for the function.</dd>
+
+<dt>CALLED ON NULL INPUT  
+RETURNS NULL ON NULL INPUT  
+STRICT  </dt>
+<dd>`CALLED ON NULL INPUT` changes the function so that it will be invoked when some or all of its arguments are null. `RETURNS NULL ON NULL INPUT` or `STRICT` changes the function so that it is not invoked if any of its arguments are null; instead, a null result is assumed automatically. See `CREATE FUNCTION` for more information.</dd>
+
+<dt>IMMUTABLE  
+STABLE  
+VOLATILE  </dt>
+<dd>Change the volatility of the function to the specified setting. See `CREATE FUNCTION` for details.</dd>
+
+<dt>\[ EXTERNAL \] SECURITY INVOKER  
+\[ EXTERNAL \] SECURITY DEFINER  </dt>
+<dd>Change whether the function is a security definer or not. The key word `EXTERNAL` is ignored for SQL conformance. See `CREATE FUNCTION` for more information about this capability.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Ignored for conformance with the SQL standard.</dd>
+
+## <a id="notes"></a>Notes
+
+HAWQ has limitations on the use of functions defined as `STABLE` or `VOLATILE`. See [CREATE FUNCTION](CREATE-FUNCTION.html) for more information.
+
+## <a id="alterfunction__section6"></a>Examples
+
+To rename the function `sqrt` for type `integer` to `square_root`:
+
+``` pre
+ALTER FUNCTION sqrt(integer) RENAME TO square_root;
+```
+
+To change the owner of the function `sqrt` for type `integer` to `joe`:
+
+``` pre
+ALTER FUNCTION sqrt(integer) OWNER TO joe;
+```
+
+To change the schema of the function `sqrt` for type `integer` to `math`:
+
+``` pre
+ALTER FUNCTION sqrt(integer) SET SCHEMA math;
+```
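+
+To change the volatility of the same function to `IMMUTABLE` (a minimal sketch of the \<action\> form):
+
+``` pre
+ALTER FUNCTION sqrt(integer) IMMUTABLE;
+```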
+
+## <a id="compat"></a>Compatibility
+
+This statement is partially compatible with the `ALTER FUNCTION` statement in the SQL standard. The standard allows more properties of a function to be modified, but does not provide the ability to rename a function, make a function a security definer, or change the owner, schema, or volatility of a function. The standard also requires the `RESTRICT` key word, which is optional in HAWQ.
+
+## <a id="see"></a>See Also
+
+[CREATE FUNCTION](CREATE-FUNCTION.html), [DROP FUNCTION](DROP-FUNCTION.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ALTER-OPERATOR-CLASS.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-OPERATOR-CLASS.html.md.erb b/reference/sql/ALTER-OPERATOR-CLASS.html.md.erb
new file mode 100644
index 0000000..1d2878e
--- /dev/null
+++ b/reference/sql/ALTER-OPERATOR-CLASS.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: ALTER OPERATOR CLASS
+---
+
+Changes the definition of an operator class.
+
+## <a id="synop"></a>Synopsis
+
+``` sql
+ALTER OPERATOR CLASS <name> USING <index_method> RENAME TO <newname>
+
+ALTER OPERATOR CLASS <name> USING <index_method> OWNER TO <newowner>
+```
+
+## <a id="desc"></a>Description
+
+`ALTER OPERATOR CLASS` changes the definition of an operator class.
+
+You must own the operator class to use `ALTER OPERATOR CLASS`. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have `CREATE` privilege on the operator class's schema. (These restrictions enforce that altering the owner does not do anything you could not do by dropping and recreating the operator class. However, a superuser can alter ownership of any operator class anyway.)
+
+## <a id="alteroperatorclass__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing operator class.</dd>
+
+<dt> \<index\_method\>   </dt>
+<dd>The name of the index method this operator class is for.</dd>
+
+<dt> \<newname\>   </dt>
+<dd>The new name of the operator class.</dd>
+
+<dt> \<newowner\>   </dt>
+<dd>The new owner of the operator class.</dd>
+
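+## <a id="alteroperatorclass__section5"></a>Examples
+
+A minimal sketch (the operator class `widget_ops`, its use of the `btree` index method, and the role `joe` are hypothetical):
+
+```pre
+ALTER OPERATOR CLASS widget_ops USING btree OWNER TO joe;
+
+ALTER OPERATOR CLASS widget_ops USING btree RENAME TO widget_btree_ops;
+```
+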
+## <a id="compat"></a>Compatibility
+
+There is no `ALTER OPERATOR CLASS` statement in the SQL standard.
+
+## <a id="see"></a>See Also
+
+[CREATE OPERATOR](CREATE-OPERATOR.html), [DROP OPERATOR CLASS](DROP-OPERATOR-CLASS.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ALTER-OPERATOR.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-OPERATOR.html.md.erb b/reference/sql/ALTER-OPERATOR.html.md.erb
new file mode 100644
index 0000000..a63d838
--- /dev/null
+++ b/reference/sql/ALTER-OPERATOR.html.md.erb
@@ -0,0 +1,50 @@
+---
+title: ALTER OPERATOR
+---
+
+Changes the definition of an operator.
+
+## <a id="synop"></a>Synopsis
+
+```pre
+ALTER OPERATOR <name> ( {<lefttype> | NONE} , {<righttype> | NONE} ) 
+   OWNER TO <newowner>        
+```
+
+## <a id="desc"></a>Description
+
+`ALTER OPERATOR` changes the definition of an operator. The only currently available functionality is to change the owner of the operator.
+
+You must own the operator to use `ALTER OPERATOR`. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have `CREATE` privilege on the operator's schema. (These restrictions enforce that altering the owner does not do anything you could not do by dropping and recreating the operator. However, a superuser can alter ownership of any operator anyway.)
+
+## <a id="alteroperator__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing operator.</dd>
+
+<dt> \<lefttype\>   </dt>
+<dd>The data type of the operator's left operand; write `NONE` if the operator has no left operand.</dd>
+
+<dt> \<righttype\>   </dt>
+<dd>The data type of the operator's right operand; write `NONE` if the operator has no right operand.</dd>
+
+<dt> \<newowner\>   </dt>
+<dd>The new owner of the operator.</dd>
+
+## <a id="example"></a>Example
+
+Change the owner of a custom operator `a @@ b` for type `text`:
+
+```pre
+ALTER OPERATOR @@ (text, text) OWNER TO joe;
+```
+
+## <a id="compat"></a>Compatibility
+
+There is no `ALTER OPERATOR` statement in the SQL standard.
+
+## <a id="see"></a>See Also
+
+[CREATE OPERATOR](CREATE-OPERATOR.html), [DROP OPERATOR](DROP-OPERATOR.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb b/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb
new file mode 100644
index 0000000..16e4411
--- /dev/null
+++ b/reference/sql/ALTER-RESOURCE-QUEUE.html.md.erb
@@ -0,0 +1,132 @@
+---
+title: ALTER RESOURCE QUEUE
+---
+
+Modify an existing resource queue.
+
+## <a id="topic1__section2"></a>Synopsis
+
+```pre
+ALTER RESOURCE QUEUE <name> WITH (<queue_attribute>=<value> [, ... ])
+```
+
+where \<queue\_attribute\> is:
+
+```pre
+   [MEMORY_LIMIT_CLUSTER=<percentage>]
+   [CORE_LIMIT_CLUSTER=<percentage>]
+   [ACTIVE_STATEMENTS=<integer>]
+   [ALLOCATION_POLICY='even']
+   [VSEG_RESOURCE_QUOTA='mem:<memory_units>']
+   [RESOURCE_OVERCOMMIT_FACTOR=<double>]
+   [NVSEG_UPPER_LIMIT=<integer>]
+   [NVSEG_LOWER_LIMIT=<integer>]
+   [NVSEG_UPPER_LIMIT_PERSEG=<double>]
+   [NVSEG_LOWER_LIMIT_PERSEG=<double>]
+```
+```
+   <memory_units> ::= {128mb|256mb|512mb|1024mb|2048mb|4096mb|
+                       8192mb|16384mb|1gb|2gb|4gb|8gb|16gb}
+   <percentage> ::= <integer>%
+```
+
+## <a id="topic1__section3"></a>Description
+
+Changes attributes for an existing resource queue in HAWQ. You cannot change the parent of an existing resource queue, and you cannot change a resource queue while it is active. Only a superuser can modify a resource queue.
+
+Resource queues with an `ACTIVE_STATEMENTS` threshold set a maximum limit on the number of parallel active query statements that can be executed by roles assigned to the leaf queue. It controls the number of active queries that are allowed to run at the same time. The value for `ACTIVE_STATEMENTS` should be an integer greater than 0. If not specified, the default value is 20.
+
+When modifying the resource queue, use MEMORY\_LIMIT\_CLUSTER and CORE\_LIMIT\_CLUSTER to tune the allowed resource usage of the resource queue. MEMORY\_LIMIT\_CLUSTER and CORE\_LIMIT\_CLUSTER must be equal for the same resource queue. In addition, the sum of the percentages of MEMORY\_LIMIT\_CLUSTER (and CORE\_LIMIT\_CLUSTER) for resource queues that share the same parent cannot exceed 100%.
+
+To modify the role associated with the resource queue, use the [ALTER ROLE](ALTER-ROLE.html) or [CREATE ROLE](CREATE-ROLE.html) command. You can only assign roles to leaf-level resource queues (resource queues that do not have any children).
+
+The default memory allotment can be overridden on a per-query basis by using the `hawq_rm_stmt_vseg_memory` and `hawq_rm_stmt_nvseg` configuration parameters. See [Configuring Resource Quotas for Query Statements](/20/resourcemgmt/ConfigureResourceManagement.html#topic_g2p_zdq_15).
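+
+A sketch of that per-query override, run in the same session before the statement (the values shown are illustrative only):
+
+```pre
+SET hawq_rm_stmt_vseg_memory='512mb';
+SET hawq_rm_stmt_nvseg=8;
+-- run the query statement here, then disable the override
+SET hawq_rm_stmt_nvseg=0;
+```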
+
+To see the status of a resource queue, see [Checking Existing Resource Queues](/20/resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
+
+See also [Best Practices for Using Resource Queues](../../bestpractices/managing_resources_bestpractices.html#topic_hvd_pls_wv).
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>Required. The name of the resource queue you wish to modify.</dd>
+
+<!-- -->
+
+<dt>MEMORY\_LIMIT\_CLUSTER=\<percentage\> </dt>
+<dd>Required. Defines how much memory a resource queue can consume from its parent resource queue and consequently dispatch to the execution of parallel statements. The valid values are 1% to 100%. The value of MEMORY\_LIMIT\_CLUSTER must be identical to the value of CORE\_LIMIT\_CLUSTER. The sum of values for MEMORY\_LIMIT\_CLUSTER of this queue plus other queues that share the same parent cannot exceed 100%. The HAWQ resource manager periodically validates this restriction.
+
+**Note:** If you want to increase the percentage, you may need to decrease the percentage of any resource queue(s) that share the same parent resource queue first. The total cannot exceed 100%.</dd>
+
+<dt>CORE\_LIMIT\_CLUSTER=\<percentage\> </dt>
+<dd>Required. The percentage of consumable CPU (virtual core) resources that the resource queue can take from its parent resource queue. The valid values are 1% to 100%. The value of CORE\_LIMIT\_CLUSTER must be identical to the value of MEMORY\_LIMIT\_CLUSTER. The sum of values for CORE\_LIMIT\_CLUSTER of this queue and queues that share the same parent cannot exceed 100%.
+
+**Note:** If you want to increase the percentage, you may need to decrease the percentage of any resource queue(s) that share the same parent resource queue first. The total cannot exceed 100%.</dd>
+
+<dt>ACTIVE\_STATEMENTS=\<integer\> </dt>
+<dd>Optional. Defines the limit of the number of parallel active statements in one leaf queue. The maximum number of connections cannot exceed this limit. If this limit is reached, the HAWQ resource manager queues more query allocation requests. Note that a single session can have several concurrent statement executions that occupy multiple connection resources. The value for `ACTIVE_STATEMENTS` should be an integer greater than 0. The default value is 20.</dd>
+
+<dt>ALLOCATION\_POLICY=\<string\> </dt>
+<dd>Optional. Defines the resource allocation policy for parallel statement execution. The default value is `even`.
+
+**Note:** This release only supports an `even` allocation policy. Even if you do not specify this attribute, the resource queue still applies an `even` allocation policy. Future releases will support alternative allocation policies.
+
+Setting the allocation policy to `even` means resources are always evenly dispatched based on current concurrency. When multiple query resource allocation requests are queued, the resource queue tries to evenly dispatch resources to queued requests until one of the following conditions is encountered:
+
+-   There are no more allocated resources in this queue to dispatch, or
+-   The ACTIVE\_STATEMENTS limit has been reached
+
+For each query resource allocation request, the HAWQ resource manager determines the minimum and maximum size of a virtual segment based on multiple factors including query cost, user configuration, table properties, and so on. For example, a hash-distributed table requires a fixed virtual segment size. With an even allocation policy, the HAWQ resource manager uses the minimum virtual segment size requirement and evenly dispatches resources to each query resource allocation request in the resource queue.</dd>
+
+<dt>VSEG\_RESOURCE\_QUOTA='mem:{128mb | 256mb | 512mb | 1024mb | 2048mb | 4096mb | 8192mb | 16384mb | 1gb | 2gb | 4gb | 8gb | 16gb}'</dt>
+<dd>Optional. This quota defines how resources are split across multiple virtual segments. For example, when the HAWQ resource manager determines that 256GB memory and 128 vcores should be allocated to the current resource queue, there are multiple ways to divide the resources across virtual segments: for example, a) 2GB/1 vcore \* 128 virtual segments or b) 1GB/0.5 vcore \* 256 virtual segments. Use this attribute to tell the HAWQ resource manager how to divide the memory, and therefore how many virtual segments to create. For example, if `VSEG_RESOURCE_QUOTA='mem:512mb'`, then the resource queue will use 512MB/0.25 vcore \* 512 virtual segments. The default value is '`mem:256mb`'.
+
+**Note:** To avoid resource fragmentation, make sure that the segment resource capacity configured for HAWQ (in HAWQ standalone mode, `hawq_rm_memory_limit_perseg`; in YARN mode, `yarn.nodemanager.resource.memory-mb`) is a multiple of the resource quotas for all virtual segments, and that the CPU-to-memory ratio is a multiple of the amount configured for `yarn.scheduler.minimum-allocation-mb`.</dd>
+
+<dt>RESOURCE\_OVERCOMMIT\_FACTOR=\<double\> </dt>
+<dd>Optional. This factor defines how much a resource can be overcommitted. The default value is `2.0`. For example, if RESOURCE\_OVERCOMMIT\_FACTOR is set to 3.0 and MEMORY\_LIMIT\_CLUSTER is set to 30%, then the maximum possible resource allocation in this queue is 90% (30% x 3.0). If the resulting maximum is bigger than 100%, then 100% is adopted. The minimum value that this attribute can be set to is `1.0`.</dd>
+
+<dt>NVSEG\_UPPER\_LIMIT=\<integer\> / NVSEG\_UPPER\_LIMIT\_PERSEG=\<double\>  </dt>
+<dd>Optional. These limits restrict the range of number of virtual segments allocated in this resource queue for executing one query statement. NVSEG\_UPPER\_LIMIT defines an upper limit of virtual segments for one statement execution regardless of actual cluster size, while NVSEG\_UPPER\_LIMIT\_PERSEG defines the same limit by using the average number of virtual segments in one physical segment. Therefore, the limit defined by NVSEG\_UPPER\_LIMIT\_PERSEG varies dynamically according to the changing size of the HAWQ cluster.
+
+For example, if you set `NVSEG_UPPER_LIMIT=10`, all query resource requests are strictly allocated no more than 10 virtual segments. If you set NVSEG\_UPPER\_LIMIT\_PERSEG=2 and there are currently 5 available HAWQ segments in the cluster, query resource requests are allocated 10 virtual segments at the most.
+
+NVSEG\_UPPER\_LIMIT cannot be set to a lower value than NVSEG\_LOWER\_LIMIT if both limits are enabled. In addition, the upper limit cannot be set to a value larger than the values set in the global configuration parameters `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit`.
+
+By default, both limits are set to **-1**, which means the limits are disabled. `NVSEG_UPPER_LIMIT` has higher priority than `NVSEG_UPPER_LIMIT_PERSEG`. If both limits are set, then `NVSEG_UPPER_LIMIT_PERSEG` is ignored. If you have enabled resource quotas for the query statement, then these limits are ignored.
+
+**Note:** If the actual lower limit of the number of virtual segments becomes greater than the upper limit, then the lower limit is automatically reduced to be equal to the upper limit. This situation is possible when a user sets both `NVSEG_UPPER_LIMIT` and `NVSEG_LOWER_LIMIT_PERSEG`. After expanding the cluster, the dynamic lower limit may become greater than the value set for the fixed upper limit.</dd>
+
+<dt>NVSEG\_LOWER\_LIMIT=\<integer\> / NVSEG\_LOWER\_LIMIT\_PERSEG=\<double\>   </dt>
+<dd>Optional. These limits specify the minimum number of virtual segments allocated for one statement execution in order to guarantee query performance. NVSEG\_LOWER\_LIMIT defines the lower limit of virtual segments for one statement execution regardless of the actual cluster size, while NVSEG\_LOWER\_LIMIT\_PERSEG defines the same limit as the average number of virtual segments in one physical segment. Therefore, the limit defined by NVSEG\_LOWER\_LIMIT\_PERSEG varies dynamically along with the size of the HAWQ cluster.
+
+NVSEG\_UPPER\_LIMIT\_PERSEG cannot be less than NVSEG\_LOWER\_LIMIT\_PERSEG if both limits are enabled.
+
+For example, if you set NVSEG\_LOWER\_LIMIT=10 and one statement execution potentially needs no fewer than 10 virtual segments, then that request is allocated at least 10 virtual segments. Likewise, if you set NVSEG\_LOWER\_LIMIT\_PERSEG=2 and there are currently 5 available HAWQ segments in the cluster, a statement that potentially needs no fewer than 10 virtual segments is also allocated at least 10 virtual segments. If a statement execution needs at most 4 virtual segments, the resource manager allocates at most 4 virtual segments instead of 10, because the request cannot use more than 4.
+
+By default, both limits are set to **-1**, which means the limits are disabled. `NVSEG_LOWER_LIMIT` has higher priority than `NVSEG_LOWER_LIMIT_PERSEG`. If both limits are set, then `NVSEG_LOWER_LIMIT_PERSEG` is ignored. If you have enabled resource quotas for the query statement, then these limits are ignored.
+
+**Note:** If the actual lower limit of the number of virtual segments becomes greater than the upper limit, then the lower limit is automatically reduced to be equal to the upper limit. This situation is possible when a user sets both `NVSEG_UPPER_LIMIT` and `NVSEG_LOWER_LIMIT_PERSEG`. After expanding the cluster, the dynamic lower limit may become greater than the value set for the fixed upper limit. </dd>
+
+## <a id="topic1__section6"></a>Examples
+
+Change the memory and core limit of a resource queue:
+
+```pre
+ALTER RESOURCE QUEUE test_queue_1 WITH (MEMORY_LIMIT_CLUSTER=40%,
+CORE_LIMIT_CLUSTER=40%);
+```
+
+Change the active statements maximum for the resource queue:
+
+```pre
+ALTER RESOURCE QUEUE test_queue_1 WITH (ACTIVE_STATEMENTS=50);
+```
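+
+Cap the number of virtual segments per statement and raise the overcommit factor for the same queue (a sketch; the values are illustrative only):
+
+```pre
+ALTER RESOURCE QUEUE test_queue_1 WITH (NVSEG_UPPER_LIMIT=10,
+RESOURCE_OVERCOMMIT_FACTOR=3.0);
+```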
+
+## <a id="topic1__section7"></a>Compatibility
+
+`ALTER RESOURCE QUEUE` is a HAWQ extension. There is no provision for resource queues or workload management in the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[ALTER ROLE](ALTER-ROLE.html), [CREATE RESOURCE QUEUE](CREATE-RESOURCE-QUEUE.html), [CREATE ROLE](CREATE-ROLE.html), [DROP RESOURCE QUEUE](DROP-RESOURCE-QUEUE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ALTER-ROLE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-ROLE.html.md.erb b/reference/sql/ALTER-ROLE.html.md.erb
new file mode 100644
index 0000000..687e776
--- /dev/null
+++ b/reference/sql/ALTER-ROLE.html.md.erb
@@ -0,0 +1,182 @@
+---
+title: ALTER ROLE
+---
+
+Changes a database role (user or group).
+
+## <a id="alterrole__section2"></a>Synopsis
+
+```pre
+ALTER ROLE <name> RENAME TO <newname>
+
+ALTER ROLE <name> SET <config_parameter> { TO | = } { <value> | DEFAULT }
+
+ALTER ROLE <name> RESET <config_parameter>
+
+ALTER ROLE <name> RESOURCE QUEUE {<queue_name> | NONE}
+
+ALTER ROLE <name> [ [WITH] <option> [ ... ] ]
+```
+
+where \<option\> can be:
+
+```pre
+      SUPERUSER | NOSUPERUSER
+    | CREATEDB | NOCREATEDB
+    | CREATEROLE | NOCREATEROLE
+    | CREATEEXTTABLE | NOCREATEEXTTABLE
+    [ ( <attribute>='<value>'[, ...] ) ]
+           where attribute and value are:
+           type='readable'|'writable'
+           protocol='gpfdist'|'http'
+    
+    | INHERIT | NOINHERIT
+    | LOGIN | NOLOGIN
+    | CONNECTION LIMIT <connlimit>
+    | [ENCRYPTED | UNENCRYPTED] PASSWORD '<password>'
+    | VALID UNTIL '<timestamp>'
+    | [ DENY <deny_point> ]
+    | [ DENY BETWEEN <deny_point> AND <deny_point>]
+    | [ DROP DENY FOR <deny_point> ]
+```
+
+## <a id="desc"></a>Description
+
+`ALTER ROLE` changes the attributes of a HAWQ role. There are several variants of this command:
+
+-   **RENAME** - Changes the name of the role. Database superusers can rename any role. Roles having `CREATEROLE` privilege can rename non-superuser roles. The current session user cannot be renamed (connect as a different user to rename a role). Because MD5-encrypted passwords use the role name as cryptographic salt, renaming a role clears its password if the password is MD5-encrypted.
+-   **SET | RESET** - Changes a role's session default for a specified configuration parameter. Whenever the role subsequently starts a new session, the specified value becomes the session default, overriding whatever setting is present in the server configuration file (`hawq-site.xml`). For a role without LOGIN privilege, session defaults have no effect. Ordinary roles can change their own session defaults. Superusers can change anyone's session defaults. Roles having `CREATEROLE` privilege can change defaults for non-superuser roles. See "Server Configuration Parameters" for more information on all user-settable configuration parameters.
+-   **RESOURCE QUEUE** - Assigns the role to a workload management resource queue. The role would then be subject to the limits assigned to the resource queue when issuing queries. Specify `NONE` to assign the role to the default resource queue. A role can only belong to one resource queue. For a role without `LOGIN` privilege, resource queues have no effect. See [CREATE RESOURCE QUEUE](CREATE-RESOURCE-QUEUE.html#topic1) for more information.
+-   **WITH** \<option\> - Changes many of the role attributes that can be specified in [CREATE ROLE](CREATE-ROLE.html). Attributes not mentioned in the command retain their previous settings. Database superusers can change any of these settings for any role. Roles having `CREATEROLE` privilege can change any of these settings, but only for non-superuser roles. Ordinary roles can only change their own password.
+
+## <a id="alterrole__section4"></a>Parameters
+
+<dt> \<name\>  </dt>
+<dd>The name of the role whose attributes are to be altered.</dd>
+
+<dt> \<newname\>  </dt>
+<dd>The new name of the role.</dd>
+
+<dt> \<config\_parameter\>=\<value\>  </dt>
+<dd>Set this role's session default for the specified configuration parameter to the given value. If value is `DEFAULT` or if `RESET` is used, the role-specific variable setting is removed, so the role will inherit the system-wide default setting in new sessions. Use `RESET ALL` to clear all role-specific settings. See [SET](SET.html) and [About Server Configuration Parameters](../guc/guc_config.html#topic1) for information about user-settable configuration parameters.</dd>
+
+<dt> \<queue\_name\>  </dt>
+<dd>The name of the resource queue to which the user-level role is to be assigned. Only roles with `LOGIN` privilege can be assigned to a resource queue. To unassign a role from a resource queue and put it in the default resource queue, specify `NONE`. A role can only belong to one resource queue.</dd>
+
+<dt>SUPERUSER | NOSUPERUSER  
+CREATEDB | NOCREATEDB  
+CREATEROLE | NOCREATEROLE  
+CREATEEXTTABLE | NOCREATEEXTTABLE \[(\<attribute\>='\<value\>')\]  </dt>
+<dd>If `CREATEEXTTABLE` is specified, the role being defined is allowed to create external tables. The default `type` is `readable` and the default `protocol` is `gpfdist` if not specified. `NOCREATEEXTTABLE` (the default) denies the role the ability to create external tables. Using the `file` protocol when creating external tables is not supported. This is because HAWQ cannot guarantee scheduling executors on a specific host. Likewise, you cannot use the `execute` command with `ON ALL` and `ON HOST` for the same reason. Use the `ON MASTER/<number>/SEGMENT <segment_id>` clause to specify which segment instances are to execute the command.</dd>
+
+<dt>INHERIT | NOINHERIT  
+LOGIN | NOLOGIN  
+CONNECTION LIMIT \<connlimit\>  
+PASSWORD '\<password\>'  
+ENCRYPTED | UNENCRYPTED  
+VALID UNTIL '\<timestamp\>'  </dt>
+<dd>These clauses alter role attributes originally set by [CREATE ROLE](CREATE-ROLE.html).</dd>
+
+<dt>DENY \<deny\_point\>  
+DENY BETWEEN \<deny\_point\> AND \<deny\_point\>   </dt>
+<dd>The `DENY` and `DENY BETWEEN` keywords set time-based constraints that are enforced at login. `DENY` sets a day or a day and time to deny access. `DENY BETWEEN` sets an interval during which access is denied. Both use the parameter \<deny\_point\>, which has the following format:
+
+```pre
+DAY <day> [ TIME '<time>' ]
+```
+
+The two parts of the \<deny_point\> parameter use the following formats:
+
+For \<day\>:
+
+``` pre
+{'Sunday' | 'Monday' | 'Tuesday' |'Wednesday' | 'Thursday' | 'Friday' |
+'Saturday' | 0-6 }
+```
+
+For \<time\>:
+
+``` pre
+{ 00-23 : 00-59 | 01-12 : 00-59 { AM | PM }}
+```
+
+The `DENY BETWEEN` clause uses two \<deny\_point\> parameters.
+
+```pre
+DENY BETWEEN <deny_point> AND <deny_point>
+
+```
+</dd>
+
+<dt>DROP DENY FOR \<deny\_point\>  </dt>
+<dd>The `DROP DENY FOR` clause removes a time-based constraint from the role. It uses the \<deny\_point\> parameter described above.</dd>
+
+## Notes
+
+Use `GRANT` and `REVOKE` for adding and removing role memberships.
+
+Caution must be exercised when specifying an unencrypted password with this command. The password will be transmitted to the server in clear text, and it might also be logged in the client's command history or the server log. The `psql` command-line client contains a meta-command `\password` that can be used to safely change a role's password.
+
+It is also possible to tie a session default to a specific database rather than to a role. Role-specific settings override database-specific ones if there is a conflict.
+
+## Examples
+
+Change the password for a role:
+
+```pre
+ALTER ROLE daria WITH PASSWORD 'passwd123';
+```
+
+Change a password expiration date:
+
+```pre
+ALTER ROLE scott VALID UNTIL 'May 4 12:00:00 2015 +1';
+```
+
+Make a password valid forever:
+
+```pre
+ALTER ROLE luke VALID UNTIL 'infinity';
+```
+
+Give a role the ability to create other roles and new databases:
+
+```pre
+ALTER ROLE joelle CREATEROLE CREATEDB;
+```
+
+Give a role a non-default setting of the `maintenance_work_mem` parameter:
+
+```pre
+ALTER ROLE admin SET maintenance_work_mem = 100000;
+```
+
+Assign a role to a resource queue:
+
+```pre
+ALTER ROLE sammy RESOURCE QUEUE poweruser;
+```
+
+Give a role permission to create writable external tables:
+
+```pre
+ALTER ROLE load CREATEEXTTABLE (type='writable');
+```
+
+Alter a role so it does not allow login access on Sundays:
+
+```pre
+ALTER ROLE user3 DENY DAY 'Sunday';
+```
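+
+Alter a role so it cannot log in on Mondays between midnight and 1:00 AM (a sketch of the `DENY BETWEEN` form, reusing the same hypothetical role):
+
+```pre
+ALTER ROLE user3 DENY BETWEEN DAY 'Monday' TIME '00:00' AND DAY 'Monday' TIME '01:00';
+```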
+
+Alter a role to remove the constraint that does not allow login access on Sundays:
+
+```pre
+ALTER ROLE user3 DROP DENY FOR DAY 'Sunday';
+```
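+
+Reset a role-specific configuration setting back to the system default (a sketch reusing the `admin` role from the earlier example):
+
+```pre
+ALTER ROLE admin RESET maintenance_work_mem;
+```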
+
+## <a id="compat"></a>Compatibility
+
+The `ALTER ROLE` statement is a HAWQ extension.
+
+## <a id="see"></a>See Also
+
+[CREATE ROLE](CREATE-ROLE.html), [DROP ROLE](DROP-ROLE.html), [SET](SET.html), [CREATE RESOURCE QUEUE](CREATE-RESOURCE-QUEUE.html), [GRANT](GRANT.html), [REVOKE](REVOKE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ALTER-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-TABLE.html.md.erb b/reference/sql/ALTER-TABLE.html.md.erb
new file mode 100644
index 0000000..7b1d74d
--- /dev/null
+++ b/reference/sql/ALTER-TABLE.html.md.erb
@@ -0,0 +1,422 @@
+---
+title: ALTER TABLE
+---
+
+Changes the definition of a table.
+
+## <a id="altertable__section2"></a>Synopsis
+
+```pre
+ALTER TABLE [ONLY] <name> RENAME [COLUMN] <column> TO <new_column>
+
+ALTER TABLE <name> RENAME TO <new_name>
+
+ALTER TABLE <name> SET SCHEMA <new_schema>
+
+ALTER TABLE [ONLY] <name> SET 
+     DISTRIBUTED BY (<column>, [ ... ] ) 
+   | DISTRIBUTED RANDOMLY 
+   | WITH (REORGANIZE=true|false)
+ 
+ALTER TABLE [ONLY] <name>
+            <action> [, ... ]
+
+ALTER TABLE <name>
+   [ ALTER PARTITION { <partition_name> | FOR (RANK(<number>)) 
+   | FOR (<value>) } <partition_action> [...] ] 
+   <partition_action>        
+```
+
+where \<action\> is one of:
+
+```pre
+   ADD [COLUMN] <column_name> <type>
+      [ ENCODING ( <storage_directive> [,...] ) ]
+      [<column_constraint> [ ... ]]
+  DROP [COLUMN] <column> [RESTRICT | CASCADE]
+  ALTER [COLUMN] <column> TYPE <type> [USING <expression>]
+  ALTER [COLUMN] <column> SET DEFAULT <expression>
+  ALTER [COLUMN] <column> DROP DEFAULT
+  ALTER [COLUMN] <column> { SET | DROP } NOT NULL
+  ALTER [COLUMN] <column> SET STATISTICS <integer>
+  ADD <table_constraint>
+  DROP CONSTRAINT <constraint_name> [RESTRICT | CASCADE]
+  SET WITHOUT OIDS
+  INHERIT <parent_table>
+  NO INHERIT <parent_table>
+  OWNER TO <new_owner>
+         
+```
+
+where \<partition\_action\> is one of:
+
+```pre
+  ALTER DEFAULT PARTITION
+  DROP DEFAULT PARTITION [IF EXISTS]
+  DROP PARTITION [IF EXISTS] { <partition_name> | 
+    FOR (RANK(<number>)) | FOR (<value>) } [CASCADE]
+  TRUNCATE DEFAULT PARTITION
+  TRUNCATE PARTITION { <partition_name> | FOR (RANK(<number>)) | 
+    FOR (<value>) }
+  RENAME DEFAULT PARTITION TO <new_partition_name>
+  RENAME PARTITION { <partition_name> | FOR (RANK(<number>)) | 
+        FOR (<value>) } TO <new_partition_name>
+  ADD DEFAULT PARTITION <name> [ ( <subpartition_spec> ) ]
+  ADD PARTITION <name>
+            <partition_element>
+      [ ( <subpartition_spec> ) ]
+  EXCHANGE DEFAULT PARTITION WITH TABLE <table_name>
+        [ WITH | WITHOUT VALIDATION ]
+  EXCHANGE PARTITION { <partition_name> | FOR (RANK(<number>)) | 
+        FOR (<value>) } WITH TABLE <table_name>
+        [ WITH | WITHOUT VALIDATION ]
+  SET SUBPARTITION TEMPLATE (<subpartition_spec>)
+  SPLIT DEFAULT PARTITION
+     { AT (<list_value>)
+     | START([<datatype>] <range_value>) [INCLUSIVE | EXCLUSIVE] 
+        END([<datatype>] <range_value>) [INCLUSIVE | EXCLUSIVE] }
+    [ INTO ( PARTITION <new_partition_name>, 
+             PARTITION <default_partition_name> ) ]
+  SPLIT PARTITION { <partition_name> | FOR (RANK(<number>)) | 
+    FOR (<value>) } AT (<value>) 
+    [ INTO (PARTITION <partition_name>, PARTITION <partition_name>)]
+```
+
+where \<partition\_element\> is:
+
+```pre
+    VALUES (<list_value> [,...] )
+  | START ([<datatype>] '<start_value>') [INCLUSIVE | EXCLUSIVE]
+    [ END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE] ]
+  | END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE]
+[ WITH ( <partition_storage_parameter>=<value> [, ... ] ) ]
+[ TABLESPACE <tablespace> ]
+```
+
+where \<subpartition\_spec\> is:
+
+```pre
+            <subpartition_element> [, ...]
+```
+
+and \<subpartition\_element\> is:
+
+```pre
+  DEFAULT SUBPARTITION <subpartition_name>
+  | [SUBPARTITION <subpartition_name>] VALUES (<list_value> [,...] )
+  | [SUBPARTITION <subpartition_name>] 
+     START ([<datatype>] '<start_value>') [INCLUSIVE | EXCLUSIVE]
+     [ END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE] ]
+     [ EVERY ( [<number> | <datatype>] '<interval_value>') ]
+  | [SUBPARTITION <subpartition_name>] 
+     END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE]
+    [ EVERY ( [<number> | <datatype>] '<interval_value>') ]
+[ WITH ( <partition_storage_parameter>=<value> [, ... ] ) ]
+[ TABLESPACE <tablespace> ]
+```
+
+where \<storage\_parameter\> is:
+
+```pre
+   APPENDONLY={TRUE}
+   BLOCKSIZE={8192-2097152}
+   ORIENTATION={ROW | PARQUET}
+   COMPRESSTYPE={ZLIB|SNAPPY|GZIP|NONE}
+   COMPRESSLEVEL={0-9}
+   FILLFACTOR={10-100}
+   OIDS[=TRUE|FALSE]
+```
+
+where \<storage\_directive\> is:
+
+```pre
+   COMPRESSTYPE={ZLIB|SNAPPY|GZIP|NONE} 
+ | COMPRESSLEVEL={0-9} 
+ | BLOCKSIZE={8192-2097152}
+```
+
+where \<column\_reference\_storage\_directive\> is:
+
+```pre
+   COLUMN <column_name> ENCODING ( <storage_directive> [, ... ] ), ... 
+ | DEFAULT COLUMN ENCODING ( <storage_directive> [, ... ] )
+```
+
+**Note:**
+When using multi-level partition designs, the following operations are not supported with ALTER TABLE:
+
+-   ADD DEFAULT PARTITION
+-   ADD PARTITION
+-   DROP DEFAULT PARTITION
+-   DROP PARTITION
+-   SPLIT PARTITION
+-   All operations that involve modifying subpartitions.
+
+## <a id="limitations"></a>Limitations
+
+HAWQ does not support using `ALTER TABLE` to `ADD` or `DROP` a column in an existing Parquet table.
+
+## <a id="altertable__section4"></a>Parameters
+
+
+<dt>ONLY  </dt>
+<dd>Only perform the operation on the table name specified. If the `ONLY` keyword is not used, the operation will be performed on the named table and any child table partitions associated with that table.</dd>
+
+<dt>\<name\>  </dt>
+<dd>The name (possibly schema-qualified) of an existing table to alter. If `ONLY` is specified, only that table is altered. If `ONLY` is not specified, the table and all its descendant tables (if any) are updated.
+
+*Note:* Constraints can only be added to an entire table, not to a partition. Because of that restriction, the \<name\> parameter can only contain a table name, not a partition name.</dd>
+
+<dt> \<column\>   </dt>
+<dd>Name of a new or existing column. Note that HAWQ distribution key columns must be treated with special care. Altering or dropping these columns can change the distribution policy for the table.</dd>
+
+<dt> \<new\_column\>   </dt>
+<dd>New name for an existing column.</dd>
+
+<dt> \<new\_name\>   </dt>
+<dd>New name for the table.</dd>
+
+<dt> \<type\>   </dt>
+<dd>Data type of the new column, or new data type for an existing column. If changing the data type of a HAWQ distribution key column, you are only allowed to change it to a compatible type (for example, `text` to `varchar` is OK, but `text` to `int` is not).</dd>
+
+<dt> \<table\_constraint\>   </dt>
+<dd>New table constraint for the table. Note that foreign key constraints are currently not supported in HAWQ. Also a table is only allowed one unique constraint and the uniqueness must be within the HAWQ distribution key.</dd>
+
+<dt> \<constraint\_name\>   </dt>
+<dd>Name of an existing constraint to drop.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the dropped column or constraint (for example, views referencing the column).</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the column or constraint if there are any dependent objects. This is the default behavior.</dd>
+
+<dt>ALL  </dt>
+<dd>Disable or enable all triggers belonging to the table including constraint related triggers. This requires superuser privilege.</dd>
+
+<dt>USER  </dt>
+<dd>Disable or enable all user-created triggers belonging to the table.</dd>
+
+<dt>DISTRIBUTED RANDOMLY | DISTRIBUTED BY (\<column\>)  </dt>
+<dd>Specifies the distribution policy for a table. The default is RANDOM distribution. Changing a distribution policy will cause the table data to be physically redistributed on disk, which can be resource intensive. If you declare the same distribution policy or change from random to hash distribution, data will not be redistributed unless you declare `SET WITH (REORGANIZE=true)`.</dd>
+
+<dt>REORGANIZE=true|false  </dt>
+<dd>Use `REORGANIZE=true` when the distribution policy has not changed or when you have changed from a random to a hash distribution, and you want to redistribute the data anyway.</dd>
+
+<dt> \<parent\_table\>   </dt>
+<dd>A parent table to associate or de-associate with this table.</dd>
+
+<dt> \<new\_owner\>   </dt>
+<dd>The role name of the new owner of the table.</dd>
+
+<dt> \<new\_tablespace\>   </dt>
+<dd>The name of the tablespace to which the table will be moved.</dd>
+
+<dt> \<new\_schema\>   </dt>
+<dd>The name of the schema to which the table will be moved.</dd>
+
+<dt> \<parent\_table\_name\>   </dt>
+<dd>When altering a partitioned table, the name of the top-level parent table.</dd>
+
+<dt>ALTER \[DEFAULT\] PARTITION  </dt>
+<dd>If altering a partition deeper than the first level of partitions, the `ALTER PARTITION` clause is used to specify which subpartition in the hierarchy you want to alter.</dd>
+
+<dt>DROP \[DEFAULT\] PARTITION  </dt>
+<dd>**Note:** Cannot be used with multi-level partitions.
+
+Drops the specified partition. If the partition has subpartitions, the subpartitions are automatically dropped as well.</dd>
+
+<dt>TRUNCATE \[DEFAULT\] PARTITION  </dt>
+<dd>Truncates the specified partition. If the partition has subpartitions, the subpartitions are automatically truncated as well.</dd>
+
+<dt>RENAME \[DEFAULT\] PARTITION  </dt>
+<dd>Changes the partition name of a partition (not the relation name). Partitioned tables are created using the naming convention: \<*parentname*\>\_\<*level*\>\_prt\_\<*partition\_name*\>.</dd>
+
+<dt>ADD DEFAULT PARTITION  </dt>
+<dd>**Note:** Cannot be used with multi-level partitions.
+
+Adds a default partition to an existing partition design. When data does not match an existing partition, it is inserted into the default partition. Partition designs that do not have a default partition will reject incoming rows that do not match an existing partition. Default partitions must be given a name.</dd>
+
+<dt>ADD PARTITION  </dt>
+<dd>**Note:** Cannot be used with multi-level partitions.
+
+\<partition\_element\> - Using the existing partition type of the table (range or list), defines the boundaries of the new partition you are adding.
+
+\<name\> - A name for this new partition.
+
+**VALUES** - For list partitions, defines the value(s) that the partition will contain.
+
+**START** - For range partitions, defines the starting range value for the partition. By default, start values are `INCLUSIVE`. For example, if you declared a start date of `'2008-01-01'`, then the partition would contain all dates greater than or equal to `'2008-01-01'`. Typically the data type of the `START` expression is the same type as the partition key column. If that is not the case, then you must explicitly cast to the intended data type.
+
+**END** - For range partitions, defines the ending range value for the partition. By default, end values are `EXCLUSIVE`. For example, if you declared an end date of `'2008-02-01'`, then the partition would contain all dates less than but not equal to `'2008-02-01'`. Typically the data type of the `END` expression is the same type as the partition key column. If that is not the case, then you must explicitly cast to the intended data type.
+
+**WITH** - Sets the table storage options for a partition. For example, you may want older partitions to be append-only tables and newer partitions to be regular heap tables. See `CREATE TABLE` for a description of the storage options.
+
+**TABLESPACE** - The name of the tablespace in which the partition is to be created.
+
+\<subpartition\_spec\> - Only allowed on partition designs that were created without a subpartition template. Declares a subpartition specification for the new partition you are adding. If the partitioned table was originally defined using a subpartition template, then the template will be used to generate the subpartitions automatically.</dd>
+
+<dt>EXCHANGE \[DEFAULT\] PARTITION  </dt>
+<dd>Exchanges another table into the partition hierarchy in place of an existing partition. In a multi-level partition design, you can only exchange the lowest level partitions (those that contain data).
+
+**WITH TABLE** \<table\_name\> - The name of the table you are swapping in to the partition design.
+
+**WITH** | **WITHOUT VALIDATION** - Validates that the data in the table matches the `CHECK` constraint of the partition you are exchanging. The default is to validate the data against the `CHECK` constraint.</dd>
+
+<dt>SET SUBPARTITION TEMPLATE  </dt>
+<dd>Modifies the subpartition template for an existing partition. After a new subpartition template is set, all new partitions added will have the new subpartition design (existing partitions are not modified).</dd>
+
+<dt>SPLIT DEFAULT PARTITION  </dt>
+<dd>**Note:** Cannot be used with multi-level partitions.
+
+Splits a default partition. In a multi-level partition design, you can only split the lowest level default partitions (those that contain data). Splitting a default partition creates a new partition containing the values specified and leaves the default partition containing any values that do not match to an existing partition.
+
+**AT** - For list partitioned tables, specifies a single list value that should be used as the criteria for the split.
+
+**START** - For range partitioned tables, specifies a starting value for the new partition.
+
+**END** - For range partitioned tables, specifies an ending value for the new partition.
+
+**INTO** - Allows you to specify a name for the new partition. When using the `INTO` clause to split a default partition, the second partition name specified should always be that of the existing default partition. If you do not know the name of the default partition, you can look it up using the `pg_partitions` view.</dd>
+
+<dt>SPLIT PARTITION  </dt>
+<dd>**Note:** Cannot be used with multi-level partitions.
+
+Splits an existing partition into two partitions. In a multi-level partition design, you can only split the lowest level partitions (those that contain data).
+
+**AT** - Specifies a single value that should be used as the criteria for the split. The partition will be divided into two new partitions with the split value specified being the starting range for the *latter* partition.
+
+**INTO** - Allows you to specify names for the two new partitions created by the split.</dd>
+
+<dt> \<partition\_name\>   </dt>
+<dd>The given name of a partition.</dd>
+
+<dt>FOR (RANK(\<number\>))  </dt>
+<dd>For range partitions, the rank of the partition in the range.</dd>
+
+<dt>FOR ('\<value\>')  </dt>
+<dd>Specifies a partition by declaring a value that falls within the partition boundary specification. If the value declared with `FOR` matches both a partition and one of its subpartitions (for example, if the value is a date and the table is partitioned by month and then by day), then `FOR` will operate on the first level where a match is found (for example, the monthly partition). If your intent is to operate on a subpartition, you must declare so as follows:
+
+``` pre
+ALTER TABLE name ALTER PARTITION FOR ('2008-10-01') DROP PARTITION FOR ('2008-10-01');
+```
+</dd>
+
+## <a id="notes"></a>Notes
+
+Take special care when altering or dropping columns that are part of the HAWQ distribution key as this can change the distribution policy for the table. HAWQ does not currently support foreign key constraints.
+
+**Note:** The table name specified in the `ALTER TABLE` command cannot be the name of a partition within a table.
+
+Adding a `CHECK` or `NOT NULL` constraint requires scanning the table to verify that existing rows meet the constraint.
+
+When a column is added with `ADD COLUMN`, all existing rows in the table are initialized with the column's default value (`NULL` if no `DEFAULT` clause is specified). Adding a column with a non-null default or changing the type of an existing column will require the entire table to be rewritten. This may take a significant amount of time for a large table, and it will temporarily require double the disk space.
+
+You can specify multiple changes in a single `ALTER TABLE` command, which will be done in a single pass over the table.
+
+The `DROP COLUMN` form does not physically remove the column, but simply makes it invisible to SQL operations. Subsequent insert and update operations in the table will store a null value for the column. Thus, dropping a column is quick but it will not immediately reduce the on-disk size of your table, as the space occupied by the dropped column is not reclaimed. The space will be reclaimed over time as existing rows are updated.
+
+The fact that `ALTER TYPE` requires rewriting the whole table is sometimes an advantage, because the rewriting process eliminates any dead space in the table. For example, to reclaim the space occupied by a dropped column immediately, the fastest way is: `ALTER TABLE <table> ALTER COLUMN <anycol> TYPE <sametype>;` where \<anycol\> is any remaining table column and \<sametype\> is the same type that column already has. This results in no semantically-visible change in the table, but the command forces rewriting, which gets rid of no-longer-useful data.
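+
+As a concrete sketch of that rewrite trick (reusing the hypothetical `distributors` table from the examples below, and assuming its `city` column is already `varchar(30)`):
+
+``` pre
+ALTER TABLE distributors ALTER COLUMN city TYPE varchar(30);
+```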
+
+If a table is partitioned or has any descendant tables, it is not permitted to add, rename, or change the type of a column in the parent table without doing the same to the descendants. This ensures that the descendants always have columns matching the parent.
+
+A recursive `DROP COLUMN` operation will remove a descendant table's column only if the descendant does not inherit that column from any other parents and never had an independent definition of the column. A nonrecursive `DROP COLUMN` (`ALTER TABLE ONLY ... DROP COLUMN`) never removes any descendant columns, but instead marks them as independently defined rather than inherited.
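+
+A minimal sketch of the nonrecursive form, using a hypothetical parent table `measurement` and column `ambient_temp`:
+
+``` pre
+ALTER TABLE ONLY measurement DROP COLUMN ambient_temp;
+```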
+
+The `OWNER` action never recurses to descendant tables; that is, it always acts as though `ONLY` were specified. Adding a constraint can recurse only for `CHECK` constraints.
+
+Changing any part of a system catalog table is not permitted.
+
+## <a id="examples"></a>Examples
+
+Add a column to a table:
+
+``` pre
+ALTER TABLE distributors ADD COLUMN address varchar(30);
+```
+
+Rename an existing column:
+
+``` pre
+ALTER TABLE distributors RENAME COLUMN address TO city;
+```
+
+Rename an existing table:
+
+``` pre
+ALTER TABLE distributors RENAME TO suppliers;
+```
+
+Add a not-null constraint to a column:
+
+``` pre
+ALTER TABLE distributors ALTER COLUMN street SET NOT NULL;
+```
+
+Add a check constraint to a table:
+
+``` pre
+ALTER TABLE distributors ADD CONSTRAINT zipchk CHECK (char_length(zipcode) = 5);
+```
+
+Move a table to a different schema:
+
+``` pre
+ALTER TABLE myschema.distributors SET SCHEMA yourschema;
+```
+
+Add a new partition to a partitioned table:
+
+``` pre
+ALTER TABLE sales ADD PARTITION
+        START (date '2009-02-01') INCLUSIVE 
+        END (date '2009-03-01') EXCLUSIVE; 
+```
+
+Add a default partition to an existing partition design:
+
+``` pre
+ALTER TABLE sales ADD DEFAULT PARTITION other;
+```
+
+Rename a partition:
+
+``` pre
+ALTER TABLE sales RENAME PARTITION FOR ('2008-01-01') TO jan08;
+```
+
+Drop the first (oldest) partition in a range sequence:
+
+``` pre
+ALTER TABLE sales DROP PARTITION FOR (RANK(1));
+```
+
+Exchange a table into your partition design:
+
+``` pre
+ALTER TABLE sales EXCHANGE PARTITION FOR ('2008-01-01') WITH TABLE jan08;
+```
+
+Split the default partition (where the existing default partition's name is `other`) to add a new monthly partition for January 2009:
+
+``` pre
+ALTER TABLE sales SPLIT DEFAULT PARTITION
+    START ('2009-01-01') INCLUSIVE
+    END ('2009-02-01') EXCLUSIVE
+    INTO (PARTITION jan09, PARTITION other);
+```
+
+Split a monthly partition into two with the first partition containing dates January 1-15 and the second partition containing dates January 16-31:
+
+``` pre
+ALTER TABLE sales SPLIT PARTITION FOR ('2008-01-01')
+    AT ('2008-01-16')
+    INTO (PARTITION jan081to15, PARTITION jan0816to31);
+```
+
+## <a id="compat"></a>Compatibility
+
+The `ADD`, `DROP`, and `SET DEFAULT` forms conform with the SQL standard. The other forms are HAWQ extensions of the SQL standard. Also, the ability to specify more than one manipulation in a single `ALTER TABLE` command is an extension. `ALTER TABLE DROP COLUMN` can be used to drop the only column of a table, leaving a zero-column table. This is an extension of SQL, which disallows zero-column tables.
+
+## <a id="altertable__section8"></a>See Also
+
+[CREATE TABLE](CREATE-TABLE.html), [DROP TABLE](DROP-TABLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ALTER-TABLESPACE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-TABLESPACE.html.md.erb b/reference/sql/ALTER-TABLESPACE.html.md.erb
new file mode 100644
index 0000000..e539177
--- /dev/null
+++ b/reference/sql/ALTER-TABLESPACE.html.md.erb
@@ -0,0 +1,55 @@
+---
+title: ALTER TABLESPACE
+---
+
+Changes the definition of a tablespace.
+
+## <a id="synopsis"></a>Synopsis
+
+``` pre
+ALTER TABLESPACE <name> RENAME TO <newname>
+
+ALTER TABLESPACE <name> OWNER TO <newowner>
+         
+```
+
+## <a id="desc"></a>Description
+
+`ALTER TABLESPACE` changes the definition of a tablespace.
+
+You must own the tablespace to use `ALTER TABLESPACE`. To alter the owner, you must also be a direct or indirect member of the new owning role. (Note that superusers have these privileges automatically.)
+
+## <a id="altertablespace__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name of an existing tablespace.</dd>
+
+<dt> \<newname\>   </dt>
+<dd>The new name of the tablespace. The new name cannot begin with *pg\_* (reserved for system tablespaces).</dd>
+
+<dt> \<newowner\>   </dt>
+<dd>The new owner of the tablespace.</dd>
+
+## <a id="altertablespace__section5"></a>Examples
+
+Rename tablespace `index_space` to `fast_raid`:
+
+``` pre
+ALTER TABLESPACE index_space RENAME TO fast_raid;
+```
+
+Change the owner of tablespace `index_space`:
+
+``` pre
+ALTER TABLESPACE index_space OWNER TO mary;
+```
+
+## <a id="altertablespace__section6"></a>Compatibility
+
+There is no `ALTER TABLESPACE` statement in the SQL standard.
+
+## <a id="see"></a>�See Also
+
+[CREATE TABLESPACE](CREATE-TABLESPACE.html), [DROP TABLESPACE](DROP-TABLESPACE.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ALTER-TYPE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-TYPE.html.md.erb b/reference/sql/ALTER-TYPE.html.md.erb
new file mode 100644
index 0000000..da50e80
--- /dev/null
+++ b/reference/sql/ALTER-TYPE.html.md.erb
@@ -0,0 +1,54 @@
+---
+title: ALTER TYPE
+---
+
+Changes the definition of a data type.
+
+## <a id="synopsis"></a>Synopsis
+
+``` pre
+ALTER TYPE <name>
+   OWNER TO <new_owner> | SET SCHEMA <new_schema>
+         
+```
+
+## <a id="desc"></a>Description
+
+`ALTER TYPE` changes the definition of an existing type. You can change the owner and the schema of a type.
+
+You must own the type to use `ALTER TYPE`. To change the schema of a type, you must also have `CREATE` privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have `CREATE` privilege on the type's schema. (These restrictions enforce that altering the owner does not do anything that could be done by dropping and recreating the type. However, a superuser can alter ownership of any type.)
+
+## <a id="altertype__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing type to alter.</dd>
+
+<dt> \<new\_owner\>   </dt>
+<dd>The user name of the new owner of the type.</dd>
+
+<dt> \<new\_schema\>   </dt>
+<dd>The new schema for the type.</dd>
+
+## <a id="altertype__section5"></a>Examples
+
+To change the owner of the user-defined type `email` to `joe`:
+
+``` pre
+ALTER TYPE email OWNER TO joe;
+```
+
+To change the schema of the user-defined type `email` to `customers`:
+
+``` pre
+ALTER TYPE email SET SCHEMA customers;
+```
+
+## <a id="altertype__section6"></a>Compatibility
+
+There is no `ALTER TYPE` statement in the SQL standard.
+
+## <a id="see"></a>See Also
+
+[CREATE TYPE](CREATE-TYPE.html), [DROP TYPE](DROP-TYPE.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ALTER-USER.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ALTER-USER.html.md.erb b/reference/sql/ALTER-USER.html.md.erb
new file mode 100644
index 0000000..f53e788
--- /dev/null
+++ b/reference/sql/ALTER-USER.html.md.erb
@@ -0,0 +1,44 @@
+---
+title: ALTER USER
+---
+
+Changes the definition of a database role (user).
+
+## <a id="alteruser__section2"></a>Synopsis
+
+``` pre
+ALTER USER <name> RENAME TO <newname>
+
+ALTER USER <name> SET <config_parameter> {TO | =} {<value> | DEFAULT}
+
+ALTER USER <name> RESET <config_parameter>
+
+ALTER USER <name> [ [WITH] <option> [ ... ] ]
+```
+
+where \<option\> can be:
+
+``` pre
+      SUPERUSER | NOSUPERUSER
+    | CREATEDB | NOCREATEDB
+    | CREATEROLE | NOCREATEROLE
+    | CREATEUSER | NOCREATEUSER
+    | INHERIT | NOINHERIT
+    | LOGIN | NOLOGIN
+    | [ ENCRYPTED | UNENCRYPTED ] PASSWORD '<password>'
+    | VALID UNTIL '<timestamp>'
+```
+
+## <a id="alteruser__section3"></a>Description
+
+`ALTER USER` is a deprecated command but is still accepted for historical reasons. It is an alias for `ALTER ROLE`. See `ALTER ROLE` for more information.
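+
+For illustration, the following statements (the role name `jdoe` and the password are hypothetical) are accepted and behave exactly like their `ALTER ROLE` counterparts:
+
+``` pre
+ALTER USER jdoe WITH PASSWORD 'new_password';
+ALTER USER jdoe CREATEDB LOGIN;
+```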
+
+## <a id="alteruser__section4"></a>Compatibility
+
+The `ALTER USER` statement is a HAWQ extension. The SQL standard leaves the definition of users to the implementation.
+
+## <a id="see"></a>See Also
+
+[ALTER ROLE](ALTER-ROLE.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/ANALYZE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/ANALYZE.html.md.erb b/reference/sql/ANALYZE.html.md.erb
new file mode 100644
index 0000000..983696a
--- /dev/null
+++ b/reference/sql/ANALYZE.html.md.erb
@@ -0,0 +1,75 @@
+---
+title: ANALYZE
+---
+
+Collects statistics about a database.
+
+## <a id="synopsis"></a>Synopsis
+
+``` pre
+ANALYZE [VERBOSE] [ROOTPARTITION [ALL]] [<table> [ (<column> [, ...] ) ]]
+```
+
+## <a id="desc"></a>Description
+
+`ANALYZE` collects statistics about the contents of tables in the database, and stores the results in the system table `pg_statistic`. Subsequently, the query planner uses these statistics to help determine the most efficient execution plans for queries.
+
+With no parameter, `ANALYZE` examines every table in the current database. With a parameter, `ANALYZE` examines only that table. It is further possible to give a list of column names, in which case only the statistics for those columns are collected.
+
+## <a id="params"></a>Parameters
+
+<dt>VERBOSE  </dt>
+<dd>Enables display of progress messages. When specified, `ANALYZE` emits progress messages to indicate which table is currently being processed. Various statistics about the tables are printed as well.</dd>
+
+<dt>ROOTPARTITION  </dt>
+<dd>For partitioned tables, `ANALYZE` on the parent (the root in multi-level partitioning) table without this option will collect statistics on each individual leaf partition as well as the global partition table, both of which are needed for query planning. In scenarios when all the individual child partitions have up-to-date statistics (for example, after loading and analyzing a daily partition), the `ROOTPARTITION` option can be used to collect only the global stats on the partition table. This could save the time of re-analyzing each individual leaf partition.
+
+If you use `ROOTPARTITION` on a non-root or non-partitioned table, `ANALYZE` will skip the option and issue a warning.
+
+**Note:** Use `ROOTPARTITION ALL` to analyze all root partition tables in the database.</dd>
+
+<dt> \<table\>   </dt>
+<dd>The name (possibly schema-qualified) of a specific table to analyze. Defaults to all tables in the current database.</dd>
+
+<dt> \<column\>   </dt>
+<dd>The name of a specific column to analyze. Defaults to all columns.</dd>
+
+## <a id="notes"></a>Notes
+
+It is a good idea to run `ANALYZE` periodically, or just after making major changes in the contents of a table. Accurate statistics will help the query planner to choose the most appropriate query plan, and thereby improve the speed of query processing. A common strategy is to run `VACUUM` and `ANALYZE` once a day during a low-usage time of day.
+
+`ANALYZE` requires only a read lock on the target table, so it can run in parallel with other activity on the table.
+
+`ANALYZE` skips tables if the user is not the table owner or database owner.
+
+The statistics collected by `ANALYZE` usually include a list of some of the most common values in each column and a histogram showing the approximate data distribution in each column. One or both of these may be omitted if `ANALYZE` deems them uninteresting (for example, in a unique-key column, there are no common values) or if the column data type does not support the appropriate operators.
+
+For large tables, `ANALYZE` takes a random sample of the table contents, rather than examining every row. This allows even very large tables to be analyzed in a small amount of time. Note, however, that the statistics are only approximate, and will change slightly each time `ANALYZE` is run, even if the actual table contents did not change. This may result in small changes in the planner's estimated costs shown by `EXPLAIN`. In rare situations, this non-determinism will cause the query optimizer to choose a different query plan between runs of `ANALYZE`. To avoid this, raise the amount of statistics collected by `ANALYZE` by adjusting the `default_statistics_target` configuration parameter, or on a column-by-column basis by setting the per-column statistics target with `ALTER TABLE ... ALTER COLUMN ... SET STATISTICS` (see `ALTER TABLE`). The target value sets the maximum number of entries in the most-common-value list and the maximum number of bins in the histogram. The default target value is 10, but this can be adjusted up or down to trade off accuracy of planner estimates against the time taken for `ANALYZE` and the amount of space occupied in `pg_statistic`. In particular, setting the statistics target to zero disables collection of statistics for that column. It may be useful to do that for columns that are never used as part of the `WHERE`, `GROUP BY`, or `ORDER BY` clauses of queries, since the planner will have no use for statistics on such columns.
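+
+For example, to raise the per-column target before re-analyzing (a sketch; the table `mytable` and column `price` are illustrative):
+
+``` pre
+ALTER TABLE mytable ALTER COLUMN price SET STATISTICS 100;
+ANALYZE mytable (price);
+```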
+
+The largest statistics target among the columns being analyzed determines the number of table rows sampled to prepare the statistics. Increasing the target causes a proportional increase in the time and space needed to do `ANALYZE`.
+
+The `pxf_enable_stat_collection` server configuration parameter determines if `ANALYZE` calculates statistics for PXF readable tables. When `pxf_enable_stat_collection` is true, the default setting, `ANALYZE` estimates the number of tuples in the table from the total size of the table, the size of the first fragment, and the number of tuples in the first fragment. Then it builds a sample table and calculates statistics for the PXF table by running statistics queries on the sample table, the same as it does with native tables. A sample table is always created to calculate PXF table statistics, even when the table has a small number of rows.
+
+The `pxf_stat_max_fragments` configuration parameter, default 100, sets the maximum number of fragments that are sampled to build the sample table. Setting `pxf_stat_max_fragments` to a higher value provides a more uniform sample, but decreases `ANALYZE` performance. Setting it to a lower value increases performance, but the statistics are calculated on a less uniform sample.
+
+When `pxf_enable_stat_collection` is false, `ANALYZE` outputs a message to warn that it is skipping the PXF table because `pxf_enable_stat_collection` is turned off.
+
+In some situations, remote statistics retrieval for a PXF table can fail. For example, if a PXF Java component is down, the remote statistics retrieval might not occur, and the database transaction would not succeed. In these cases, the statistics remain at the default external table values.
+
+## <a id="examples"></a>Examples
+
+Collect statistics for the table `mytable`:
+
+``` pre
+ANALYZE mytable;
+```
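+
+Collect statistics for selected columns only, or collect only the root-level statistics of a partitioned table (a sketch; the column names and the partitioned table `sales` are illustrative):
+
+``` pre
+ANALYZE mytable (col1, col2);
+ANALYZE ROOTPARTITION sales;
+```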
+
+## <a id="compat"></a>Compatibility
+
+There is no `ANALYZE` statement in the SQL standard.
+
+## <a id="see"></a>See Also
+
+[ALTER TABLE](ALTER-TABLE.html), [EXPLAIN](EXPLAIN.html), [VACUUM](VACUUM.html)
+
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/BEGIN.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/BEGIN.html.md.erb b/reference/sql/BEGIN.html.md.erb
new file mode 100644
index 0000000..5c2a9bb
--- /dev/null
+++ b/reference/sql/BEGIN.html.md.erb
@@ -0,0 +1,58 @@
+---
+title: BEGIN
+---
+
+Starts a transaction block.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+BEGIN [WORK | TRANSACTION] [SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED]
+      [READ WRITE | READ ONLY]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`BEGIN` initiates a transaction block, that is, all statements after a `BEGIN` command will be executed in a single transaction until an explicit `COMMIT` or `ROLLBACK` is given. By default (without `BEGIN`), HAWQ executes transactions in autocommit mode, that is, each statement is executed in its own transaction and a commit is implicitly performed at the end of the statement (if execution was successful, otherwise a rollback is done).
+
+Statements are executed more quickly in a transaction block, because transaction start/commit requires significant CPU and disk activity. Execution of multiple statements inside a transaction is also useful to ensure consistency when making several related changes: other sessions will be unable to see the intermediate states wherein not all the related updates have been done.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>WORK  
+TRANSACTION  </dt>
+<dd>Optional key words. They have no effect.</dd>
+
+<dt>SERIALIZABLE  
+REPEATABLE READ  
+READ COMMITTED  
+READ UNCOMMITTED  </dt>
+<dd>The SQL standard defines four transaction isolation levels: `READ COMMITTED`, `READ UNCOMMITTED`, `SERIALIZABLE`, and `REPEATABLE READ`. The default behavior is that a statement can only see rows committed before it began (`READ COMMITTED`). In HAWQ, `READ UNCOMMITTED` is treated the same as `READ COMMITTED`. `SERIALIZABLE` is supported the same as `REPEATABLE READ` wherein all statements of the current transaction can only see rows committed before the first statement was executed in the transaction. `SERIALIZABLE` is the strictest transaction isolation. This level emulates serial transaction execution, as if transactions had been executed one after another, serially, rather than concurrently. Applications using this level must be prepared to retry transactions due to serialization failures.</dd>
+
+<dt>READ WRITE  
+READ ONLY  </dt>
+<dd>Determines whether the transaction is read/write or read-only. Read/write is the default. When a transaction is read-only, the following SQL commands are disallowed: `INSERT`, `UPDATE`, `DELETE`, and `COPY FROM` if the table they would write to is not a temporary table; all `CREATE`, `ALTER`, and `DROP` commands; `GRANT`, `REVOKE`, `TRUNCATE`; and `EXPLAIN ANALYZE` and `EXECUTE` if the command they would execute is among those listed.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Use [COMMIT](COMMIT.html) or [ROLLBACK](ROLLBACK.html) to terminate a transaction block.
+
+Issuing `BEGIN` when already inside a transaction block will provoke a warning message. The state of the transaction is not affected. To nest transactions within a transaction block, use savepoints (see [SAVEPOINT](SAVEPOINT.html)).
+
+## <a id="topic1__section6"></a>Examples
+
+To begin a transaction block:
+
+``` pre
+BEGIN;
+```
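+
+To begin a read-only transaction at the strictest isolation level (a sketch; the query shown is illustrative):
+
+``` pre
+BEGIN TRANSACTION SERIALIZABLE READ ONLY;
+SELECT count(*) FROM pg_class;
+COMMIT;
+```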
+
+## <a id="topic1__section7"></a>Compatibility
+
+`BEGIN` is a HAWQ language extension. It is equivalent to the SQL-standard command `START TRANSACTION`.
+
+Incidentally, the `BEGIN` key word is used for a different purpose in embedded SQL. You are advised to be careful about the transaction semantics when porting database applications.
+
+## <a id="topic1__section8"></a>See Also
+
+[COMMIT](COMMIT.html), [ROLLBACK](ROLLBACK.html), [SAVEPOINT](SAVEPOINT.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CHECKPOINT.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CHECKPOINT.html.md.erb b/reference/sql/CHECKPOINT.html.md.erb
new file mode 100644
index 0000000..d699013
--- /dev/null
+++ b/reference/sql/CHECKPOINT.html.md.erb
@@ -0,0 +1,23 @@
+---
+title: CHECKPOINT
+---
+
+Forces a transaction log checkpoint.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CHECKPOINT
+```
+
+## <a id="topic1__section3"></a>Description
+
+Write-Ahead Logging (WAL) puts a checkpoint in the transaction log every so often. The automatic checkpoint interval is set per HAWQ segment instance by the server configuration parameters `checkpoint_segments` and `checkpoint_timeout`. The `CHECKPOINT` command forces an immediate checkpoint when the command is issued, without waiting for a scheduled checkpoint.
+
+A checkpoint is a point in the transaction log sequence at which all data files have been updated to reflect the information in the log. All data files will be flushed to disk.
+
+Only superusers may call `CHECKPOINT`. The command is not intended for use during normal operation.
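+
+For example, to force an immediate checkpoint from a superuser session:
+
+``` pre
+CHECKPOINT;
+```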
+
+## <a id="topic1__section4"></a>Compatibility
+
+The `CHECKPOINT` command is a HAWQ language extension.

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CLOSE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CLOSE.html.md.erb b/reference/sql/CLOSE.html.md.erb
new file mode 100644
index 0000000..ae9c958
--- /dev/null
+++ b/reference/sql/CLOSE.html.md.erb
@@ -0,0 +1,45 @@
+---
+title: CLOSE
+---
+
+Closes a cursor.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CLOSE <cursor_name>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CLOSE` frees the resources associated with an open cursor. After the cursor is closed, no subsequent operations are allowed on it. A cursor should be closed when it is no longer needed.
+
+Every non-holdable open cursor is implicitly closed when a transaction is terminated by `COMMIT` or `ROLLBACK`. A holdable cursor is implicitly closed if the transaction that created it aborts via `ROLLBACK`. If the creating transaction successfully commits, the holdable cursor remains open until an explicit `CLOSE` is executed, or the client disconnects.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<cursor\_name\>   </dt>
+<dd>The name of an open cursor to close.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+HAWQ does not have an explicit `OPEN` cursor statement. A cursor is considered open when it is declared. Use the `DECLARE` statement to declare (and open) a cursor.
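+
+A minimal sketch of the full cursor lifecycle inside a transaction (the cursor name `mycursor` and table `films` are illustrative):
+
+``` pre
+BEGIN;
+DECLARE mycursor CURSOR FOR SELECT * FROM films;
+FETCH FORWARD 5 FROM mycursor;
+CLOSE mycursor;
+COMMIT;
+```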
+
+You can see all available cursors by querying the `pg_cursors` system view.
+
+## <a id="topic1__section6"></a>Examples
+
+Close the cursor `portala`:
+
+``` pre
+CLOSE portala;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CLOSE` is fully conforming with the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[DECLARE](DECLARE.html), [FETCH](FETCH.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/COMMIT.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/COMMIT.html.md.erb b/reference/sql/COMMIT.html.md.erb
new file mode 100644
index 0000000..dd91969
--- /dev/null
+++ b/reference/sql/COMMIT.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: COMMIT
+---
+
+Commits the current transaction.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+COMMIT [WORK | TRANSACTION]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`COMMIT` commits the current transaction. All changes made by the transaction become visible to others and are guaranteed to be durable if a crash occurs.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>WORK  
+TRANSACTION  </dt>
+<dd>Optional key words. They have no effect.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Use [ROLLBACK](ROLLBACK.html) to abort a transaction.
+
+Issuing `COMMIT` when not inside a transaction does no harm, but it will provoke a warning message.
+
+## <a id="topic1__section6"></a>Examples
+
+To commit the current transaction and make all changes permanent:
+
+``` pre
+COMMIT;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The SQL standard only specifies the two forms `COMMIT` and `COMMIT WORK`. Otherwise, this command is fully conforming.
+
+## <a id="topic1__section8"></a>See Also
+
+[BEGIN](BEGIN.html), [END](END.html), [ROLLBACK](ROLLBACK.html)