Posted to commits@hawq.apache.org by yo...@apache.org on 2017/01/10 23:54:06 UTC

[15/57] [abbrv] [partial] incubator-hawq-docs git commit: HAWQ-1254 Fix/remove book branching on incubator-hawq-docs

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-OPERATOR-CLASS.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-OPERATOR-CLASS.html.md.erb b/markdown/reference/sql/CREATE-OPERATOR-CLASS.html.md.erb
new file mode 100644
index 0000000..9c093c1
--- /dev/null
+++ b/markdown/reference/sql/CREATE-OPERATOR-CLASS.html.md.erb
@@ -0,0 +1,153 @@
+---
+title: CREATE OPERATOR CLASS
+---
+
+Defines a new operator class.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE OPERATOR CLASS <name> [DEFAULT] FOR TYPE <data_type>
+  USING <index_method> AS
+  {
+  OPERATOR <strategy_number>
+            <op_name> [(<op_type>, <op_type>)] [RECHECK]
+  | FUNCTION <support_number>
+            <funcname> (<argument_type> [, ...] )
+  | STORAGE <storage_type>
+  } [, ... ]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE OPERATOR CLASS` creates a new operator class. An operator class defines how a particular data type can be used with an index. The operator class specifies that certain operators will fill particular roles or strategies for this data type and this index method. The operator class also specifies the support procedures to be used by the index method when the operator class is selected for an index column. All the operators and functions used by an operator class must be defined before the operator class is created. Any functions used to implement the operator class must be defined as `IMMUTABLE`.
+
+`CREATE OPERATOR CLASS` does not presently check whether the operator class definition includes all the operators and functions required by the index method, nor whether the operators and functions form a self-consistent set. It is the user's responsibility to define a valid operator class.
+
+You must be a superuser to create an operator class.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The (optionally schema-qualified) name of the operator class to be defined. Two operator classes in the same schema can have the same name only if they are for different index methods.</dd>
+
+<dt>DEFAULT  </dt>
+<dd>Makes the operator class the default operator class for its data type. At most one operator class can be the default for a specific data type and index method.</dd>
+
+<dt> \<data\_type\>   </dt>
+<dd>The column data type that this operator class is for.</dd>
+
+<dt> \<index\_method\>   </dt>
+<dd>The name of the index method this operator class is for. Choices are `btree`, `bitmap`, and `gist`.</dd>
+
+<dt> \<strategy\_number\>   </dt>
+<dd>The operators associated with an operator class are identified by \<strategy number\>s, which serve to identify the semantics of each operator within the context of its operator class. For example, B-trees impose a strict ordering on keys, lesser to greater, and so operators like *less than* and *greater than or equal to* are interesting with respect to a B-tree. These strategies can be thought of as generalized operators. Each operator class specifies which actual operator corresponds to each strategy for a particular data type and interpretation of the index semantics. The corresponding strategy numbers for each index method are as follows: <a id="topic1__bx145491"></a>
+
+<span class="tablecap">Table 1. B-tree and Bitmap Strategies</span>
+
+| Operation             | Strategy Number |
+|-----------------------|-----------------|
+| less than             | 1               |
+| less than or equal    | 2               |
+| equal                 | 3               |
+| greater than or equal | 4               |
+| greater than          | 5               |
+
+<span class="tablecap">Table 2. GiST Two-Dimensional Strategies (R-Tree)</span>
+
+<a id="topic1__bx145491a"></a>
+
+| Operation                   | Strategy Number |
+|-----------------------------|-----------------|
+| strictly left of            | 1               |
+| does not extend to right of | 2               |
+| overlaps                    | 3               |
+| does not extend to left of  | 4               |
+| strictly right of           | 5               |
+| same                        | 6               |
+| contains                    | 7               |
+| contained by                | 8               |
+| does not extend above       | 9               |
+| strictly below              | 10              |
+| strictly above              | 11              |
+</dd>
+
+<dt> \<op\_name\>   </dt>
+<dd>The name (optionally schema-qualified) of an operator associated with the operator class.</dd>
+
+<dt> \<op\_type\>   </dt>
+<dd>The operand data type(s) of an operator, or `NONE` to signify a left-unary or right-unary operator. The operand data types may be omitted in the normal case where they are the same as the operator class data type.</dd>
+
+<dt>RECHECK  </dt>
+<dd>If present, the index is "lossy" for this operator, and so the rows retrieved using the index must be rechecked to verify that they actually satisfy the qualification clause involving this operator.</dd>
+
+<dt> \<support\_number\>   </dt>
+<dd>Index methods require additional support routines in order to work. These operations are administrative routines used internally by the index methods. As with strategies, the operator class identifies which specific functions should play each of these roles for a given data type and semantic interpretation. The index method defines the set of functions it needs, and the operator class identifies the correct functions to use by assigning them to the *support function numbers* as follows: <a id="topic1__bx145974"></a>
+
+<span class="tablecap">Table 3. B-tree and Bitmap Support Functions</span>
+
+| Function                                                                                                                                                                | Support Number |
+|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|
+| Compare two keys and return an integer less than zero, zero, or greater than zero, indicating whether the first key is less than, equal to, or greater than the second. | 1              |
+
+
+<span class="tablecap">Table 4. GiST Support Functions</span>
+
+<a id="topic1__bx145974a"></a>
+
+| Function                                                                                                                      | Support Number |
+|-------------------------------------------------------------------------------------------------------------------------------|----------------|
+| consistent - determine whether key satisfies the query qualifier.                                                             | 1              |
+| union - compute union of a set of keys.                                                                                       | 2              |
+| compress - compute a compressed representation of a key or value to be indexed.                                               | 3              |
+| decompress - compute a decompressed representation of a compressed key.                                                       | 4              |
+| penalty - compute penalty for inserting new key into subtree with given subtree's key.                                        | 5              |
+| picksplit - determine which entries of a page are to be moved to the new page and compute the union keys for resulting pages. | 6              |
+| equal - compare two keys and return true if they are equal.                                                                   | 7              |
+</dd>
+
+<dt> \<funcname\>   </dt>
+<dd>The name (optionally schema-qualified) of a function that is an index method support procedure for the operator class.</dd>
+
+<dt> \<argument\_type\>   </dt>
+<dd>The parameter data type(s) of the function.</dd>
+
+<dt> \<storage\_type\>   </dt>
+<dd>The data type actually stored in the index. Normally this is the same as the column data type, but the GiST index method allows it to be different. The `STORAGE` clause must be omitted unless the index method allows a different type to be used.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Because the index machinery does not check access permissions on functions before using them, including a function or operator in an operator class is the same as granting public execute permission on it. This is usually not an issue for the sorts of functions that are useful in an operator class.
+
+The operators should not be defined by SQL functions. A SQL function is likely to be inlined into the calling query, which will prevent the optimizer from recognizing that the query matches an index.
+
+Any functions used to implement the operator class must be defined as `IMMUTABLE`.
+
+## <a id="topic1__section6"></a>Examples
+
+The following example command defines a GiST index operator class for the data type `_int4` (array of int4):
+
+``` pre
+CREATE OPERATOR CLASS gist__int_ops
+    DEFAULT FOR TYPE _int4 USING gist AS
+        OPERATOR 3 &&,
+        OPERATOR 6 = RECHECK,
+        OPERATOR 7 @>,
+        OPERATOR 8 <@,
+        OPERATOR 20 @@ (_int4, query_int),
+        FUNCTION 1 g_int_consistent (internal, _int4, int4),
+        FUNCTION 2 g_int_union (bytea, internal),
+        FUNCTION 3 g_int_compress (internal),
+        FUNCTION 4 g_int_decompress (internal),
+        FUNCTION 5 g_int_penalty (internal, internal, internal),
+        FUNCTION 6 g_int_picksplit (internal, internal),
+        FUNCTION 7 g_int_same (_int4, _int4, internal);
+```
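+
+For comparison, the following is a minimal B-tree operator class sketch for a hypothetical type `mytype`. It maps the five strategy numbers from Table 1 to comparison operators and supplies the single required support function; the `mytype` operators and the `mytype_cmp` function are assumed to have been defined beforehand with `CREATE OPERATOR` and `CREATE FUNCTION`:
+
+``` pre
+CREATE OPERATOR CLASS mytype_ops
+    DEFAULT FOR TYPE mytype USING btree AS
+        -- strategies 1-5 per Table 1: <, <=, =, >=, >
+        OPERATOR 1 < ,
+        OPERATOR 2 <= ,
+        OPERATOR 3 = ,
+        OPERATOR 4 >= ,
+        OPERATOR 5 > ,
+        -- support function 1: three-way comparison returning an integer
+        FUNCTION 1 mytype_cmp(mytype, mytype);
+```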
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE OPERATOR CLASS` is a HAWQ extension. There is no `CREATE OPERATOR CLASS` statement in the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[ALTER OPERATOR CLASS](ALTER-OPERATOR-CLASS.html), [DROP OPERATOR CLASS](DROP-OPERATOR-CLASS.html), [CREATE FUNCTION](CREATE-FUNCTION.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-OPERATOR.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-OPERATOR.html.md.erb b/markdown/reference/sql/CREATE-OPERATOR.html.md.erb
new file mode 100644
index 0000000..570d226
--- /dev/null
+++ b/markdown/reference/sql/CREATE-OPERATOR.html.md.erb
@@ -0,0 +1,171 @@
+---
+title: CREATE OPERATOR
+---
+
+Defines a new operator.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE OPERATOR <name> ( 
+       PROCEDURE = <funcname>
+       [, LEFTARG = <lefttype>] [, RIGHTARG = <righttype>]
+       [, COMMUTATOR = <com_op>] [, NEGATOR = <neg_op>]
+       [, RESTRICT = <res_proc>] [, JOIN = <join_proc>]
+       [, HASHES] [, MERGES]
+       [, SORT1 = <left_sort_op>] [, SORT2 = <right_sort_op>]
+       [, LTCMP = <less_than_op>] [, GTCMP = <greater_than_op>] )
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE OPERATOR` defines a new operator. The user who defines an operator becomes its owner.
+
+The operator name is a sequence of up to `NAMEDATALEN`-1 (63 by default) characters from the following list: `` + - * / < > = ~ ! @ # % ^ & | ` ? ``
+
+There are a few restrictions on your choice of name:
+
+-   `--` and `/*` cannot appear anywhere in an operator name, since they will be taken as the start of a comment.
+-   A multicharacter operator name cannot end in `+` or `-`, unless the name also contains at least one of these characters: `` ~ ! @ # % ^ & | ` ? ``
+
+For example, `@-` is an allowed operator name, but `*-` is not. This restriction allows HAWQ to parse SQL-compliant commands without requiring spaces between tokens.
+
+The operator `!=` is mapped to `<>` on input, so these two names are always equivalent.
+
+At least one of `LEFTARG` and `RIGHTARG` must be defined. For binary operators, both must be defined. For right unary operators, only `LEFTARG` should be defined, while for left unary operators only `RIGHTARG` should be defined.
+
+The \<funcname\> procedure must have been previously defined using `CREATE FUNCTION`, must be `IMMUTABLE`, and must be defined to accept the correct number of arguments (either one or two) of the indicated types.
+
+The other clauses specify optional operator optimization clauses. These clauses should be provided whenever appropriate to speed up queries that use the operator. But if you provide them, you must be sure that they are correct. Incorrect use of an optimization clause can result in server process crashes, subtly wrong output, or other unexpected results. You can always leave out an optimization clause if you are not sure about it.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The (optionally schema-qualified) name of the operator to be defined. Two operators in the same schema can have the same name if they operate on different data types.</dd>
+
+<dt> \<funcname\>   </dt>
+<dd>The function used to implement this operator (must be an `IMMUTABLE` function).</dd>
+
+<dt> \<lefttype\>   </dt>
+<dd>The data type of the operator's left operand, if any. This option would be omitted for a left-unary operator.</dd>
+
+<dt> \<righttype\>   </dt>
+<dd>The data type of the operator's right operand, if any. This option would be omitted for a right-unary operator.</dd>
+
+<dt> \<com\_op\>   </dt>
+<dd>The optional `COMMUTATOR` clause names an operator that is the commutator of the operator being defined. We say that operator A is the commutator of operator B if (x A y) equals (y B x) for all possible input values x, y. Notice that B is also the commutator of A. For example, operators `<` and `>` for a particular data type are usually each other's commutators, and operator `+` is usually commutative with itself. But operator `-` is usually not commutative with anything. The left operand type of a commutable operator is the same as the right operand type of its commutator, and vice versa. So the name of the commutator operator is all that needs to be provided in the `COMMUTATOR` clause.</dd>
+
+<dt> \<neg\_op\>   </dt>
+<dd>The optional `NEGATOR` clause names an operator that is the negator of the operator being defined. We say that operator A is the negator of operator B if both return Boolean results and (x A y) equals NOT (x B y) for all possible inputs x, y. Notice that B is also the negator of A. For example, `<` and `>=` are a negator pair for most data types. An operator's negator must have the same left and/or right operand types as the operator to be defined, so only the operator name need be given in the `NEGATOR` clause.</dd>
+
+<dt> \<res\_proc\>   </dt>
+<dd>The optional `RESTRICT` clause names a restriction selectivity estimation function for the operator. Note that this is a function name, not an operator name. `RESTRICT` clauses only make sense for binary operators that return `boolean`. The idea behind a restriction selectivity estimator is to guess what fraction of the rows in a table will satisfy a `WHERE`-clause condition of the form:
+
+``` pre
+column OP constant
+```
+
+for the current operator and a particular constant value. This assists the optimizer by giving it some idea of how many rows will be eliminated by `WHERE` clauses that have this form.
+
+You can usually just use one of the following system standard estimator functions for many of your own operators:
+
+`eqsel` for =
+
+`neqsel` for &lt;&gt;
+
+`scalarltsel` for &lt; or &lt;=
+
+`scalargtsel` for &gt; or &gt;=
+</dd>
+
+<dt> \<join\_proc\>   </dt>
+<dd>The optional `JOIN` clause names a join selectivity estimation function for the operator. Note that this is a function name, not an operator name. `JOIN` clauses only make sense for binary operators that return `boolean`. The idea behind a join selectivity estimator is to guess what fraction of the rows in a pair of tables will satisfy a `WHERE`-clause condition of the form:
+
+``` pre
+table1.column1 OP table2.column2
+```
+
+for the current operator. This helps the optimizer by letting it figure out which of several possible join sequences is likely to take the least work.
+
+You can usually just use one of the following system standard join selectivity estimator functions for many of your own operators:
+
+`eqjoinsel` for =
+
+`neqjoinsel` for &lt;&gt;
+
+`scalarltjoinsel` for &lt; or &lt;=
+
+`scalargtjoinsel` for &gt; or &gt;=
+
+`areajoinsel` for 2D area-based comparisons
+
+`positionjoinsel` for 2D position-based comparisons
+
+`contjoinsel` for 2D containment-based comparisons
+</dd>
+
+<dt>HASHES  </dt>
+<dd>The optional `HASHES` clause tells the system that it is permissible to use the hash join method for a join based on this operator. `HASHES` only makes sense for a binary operator that returns `boolean`. The hash join operator can only return true for pairs of left and right values that hash to the same hash code. If two values get put in different hash buckets, the join will never compare them at all, implicitly assuming that the result of the join operator must be false. So it never makes sense to specify `HASHES` for operators that do not represent equality.
+
+To be marked `HASHES`, the join operator must appear in a hash index operator class. Attempts to use the operator in hash joins will fail at run time if no such operator class exists. The system needs the operator class to find the data-type-specific hash function for the operator's input data type. You must also supply a suitable hash function before you can create the operator class. Care should be exercised when preparing a hash function, because there are machine-dependent ways in which it might fail to do the right thing.</dd>
+
+<dt>MERGES  </dt>
+<dd>The `MERGES` clause, if present, tells the system that it is permissible to use the merge-join method for a join based on this operator. `MERGES` only makes sense for a binary operator that returns `boolean`, and in practice the operator must represent equality for some data type or pair of data types.
+
+Merge join is based on the idea of sorting the left- and right-hand tables into order and then scanning them in parallel. So, both data types must be capable of being fully ordered, and the join operator must be one that can only succeed for pairs of values that fall at the same place in the sort order. In practice this means that the join operator must behave like equality. It is possible to merge-join two distinct data types so long as they are logically compatible. For example, the smallint-versus-integer equality operator is merge-joinable. We only need sorting operators that will bring both data types into a logically compatible sequence.
+
+Execution of a merge join requires that the system be able to identify four operators related to the merge-join equality operator: less-than comparison for the left operand data type, less-than comparison for the right operand data type, less-than comparison between the two data types, and greater-than comparison between the two data types. It is possible to specify these operators individually by name, as the `SORT1`, `SORT2`, `LTCMP`, and `GTCMP` options respectively. The system will fill in the default names if any of these are omitted when `MERGES` is specified.</dd>
+
+<dt> \<left\_sort\_op\>   </dt>
+<dd>If this operator can support a merge join, the less-than operator that sorts the left-hand data type of this operator. `<` is the default if not specified.</dd>
+
+<dt> \<right\_sort\_op\>   </dt>
+<dd>If this operator can support a merge join, the less-than operator that sorts the right-hand data type of this operator. `<` is the default if not specified.</dd>
+
+<dt> \<less\_than\_op\>   </dt>
+<dd>If this operator can support a merge join, the less-than operator that compares the input data types of this operator. `<` is the default if not specified.</dd>
+
+<dt> \<greater\_than\_op\>   </dt>
+<dd>If this operator can support a merge join, the greater-than operator that compares the input data types of this operator. `>` is the default if not specified.
+
+To give a schema-qualified operator name in optional arguments, use the `OPERATOR()` syntax, for example:
+
+``` pre
+COMMUTATOR = OPERATOR(myschema.===) ,
+```
+</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Any functions used to implement the operator must be defined as `IMMUTABLE`.
+
+## <a id="topic1__section6"></a>Examples
+
+Here is an example of creating an operator for adding two complex numbers, assuming we have already created the definition of type `complex`. First define the function that does the work, then define the operator:
+
+``` pre
+CREATE FUNCTION complex_add(complex, complex)
+    RETURNS complex
+    AS 'filename', 'complex_add'
+    LANGUAGE C IMMUTABLE STRICT;
+CREATE OPERATOR + (
+    leftarg = complex,
+    rightarg = complex,
+    procedure = complex_add,
+    commutator = +
+);
+```
+
+To use this operator in a query:
+
+``` pre
+SELECT (a + b) AS c FROM test_complex;
+```
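+
+The following sketch extends this example with the optimization clauses described in the Parameters section, assuming a hypothetical equality function `complex_eq` has been created in the same way as `complex_add`. It wires up the commutator, negator, standard selectivity estimators, and hash/merge flags as for a typical equality operator:
+
+``` pre
+CREATE OPERATOR = (
+    leftarg = complex,
+    rightarg = complex,
+    procedure = complex_eq,  -- assumed IMMUTABLE equality function
+    commutator = = ,         -- equality is its own commutator
+    negator = <> ,           -- assumes a matching inequality operator
+    restrict = eqsel,        -- standard equality restriction estimator
+    join = eqjoinsel,        -- standard equality join estimator
+    hashes,
+    merges
+);
+```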
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE OPERATOR` is a HAWQ language extension. The SQL standard does not provide for user-defined operators.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE FUNCTION](CREATE-FUNCTION.html), [CREATE TYPE](CREATE-TYPE.html), [ALTER OPERATOR](ALTER-OPERATOR.html), [DROP OPERATOR](DROP-OPERATOR.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb b/markdown/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb
new file mode 100644
index 0000000..8f9fe93
--- /dev/null
+++ b/markdown/reference/sql/CREATE-RESOURCE-QUEUE.html.md.erb
@@ -0,0 +1,139 @@
+---
+title: CREATE RESOURCE QUEUE
+---
+
+Defines a new resource queue.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE RESOURCE QUEUE <name> WITH (<queue_attribute>=<value> [, ... ])
+```
+
+where \<queue\_attribute\> is:
+
+``` pre
+    PARENT=<queue_name>
+    MEMORY_LIMIT_CLUSTER=<percentage>
+    CORE_LIMIT_CLUSTER=<percentage>
+    [ACTIVE_STATEMENTS=<integer>]
+    [ALLOCATION_POLICY='even']
+    [VSEG_RESOURCE_QUOTA='mem:<memory_units>']
+    [RESOURCE_OVERCOMMIT_FACTOR=<double>]
+    [NVSEG_UPPER_LIMIT=<integer>]
+    [NVSEG_LOWER_LIMIT=<integer>]
+    [NVSEG_UPPER_LIMIT_PERSEG=<double>]
+    [NVSEG_LOWER_LIMIT_PERSEG=<double>]
+```
+
+``` pre
+    <memory_units> ::= {128mb|256mb|512mb|1024mb|2048mb|4096mb|
+                        8192mb|16384mb|1gb|2gb|4gb|8gb|16gb}
+    <percentage> ::= <integer>%
+```
+
+## <a id="topic1__section3"></a>Description
+
+Creates a new resource queue for HAWQ workload management. A resource queue must specify a parent queue. Only a superuser can create a resource queue.
+
+Resource queues with an `ACTIVE_STATEMENTS` threshold set a maximum limit on the number of queries that can be executed by roles assigned to that queue. It controls the number of active queries that are allowed to run at the same time. The value for `ACTIVE_STATEMENTS` should be an integer greater than 0. If not specified, the default value is 20.
+
+When creating the resource queue, use the MEMORY\_LIMIT\_CLUSTER and CORE\_LIMIT\_CLUSTER queue attributes to tune the allowed resource usage of the resource queue. MEMORY\_LIMIT\_CLUSTER and CORE\_LIMIT\_CLUSTER must be set to the same value for a resource queue. In addition, the sum of the percentages of MEMORY\_LIMIT\_CLUSTER (and CORE\_LIMIT\_CLUSTER) for resource queues that share the same parent cannot exceed 100%.
+
+You can optionally configure the maximum or minimum number of virtual segments to use when executing a query by setting NVSEG\_UPPER\_LIMIT/NVSEG\_LOWER\_LIMIT or NVSEG\_UPPER\_LIMIT\_PERSEG/NVSEG\_LOWER\_LIMIT\_PERSEG attributes for the resource queue.
+
+After defining a resource queue, you can assign a role to the queue by using the [ALTER ROLE](ALTER-ROLE.html) or [CREATE ROLE](CREATE-ROLE.html) command. You can only assign roles to leaf-level resource queues (resource queues that do not have any children).
+
+See also [Best Practices for Using Resource Queues](../../bestpractices/managing_resources_bestpractices.html#topic_hvd_pls_wv).
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>\<name\> </dt>
+<dd>Required. The name of the resource queue. The name must not already be in use and must not be `pg_default` or `pg_root`.</dd>
+
+<dt>PARENT=\<queue\_name\> </dt>
+<dd>Required. The parent queue of the new resource queue. The parent queue must already exist. This attribute is used to organize resource queues into a tree structure. You cannot specify `pg_default` as a parent queue. Resource queues that are parents to other resource queues are also called branch queues. Resource queues without any children are also called leaf queues. If you do not have any existing resource queues, use `pg_root` as the starting point for new resource queues.
+
+The parent queue cannot have any roles assigned.</dd>
+
+<dt>MEMORY\_LIMIT\_CLUSTER=\<percentage\>  </dt>
+<dd>Required. Defines how much memory a resource queue can consume from its parent resource queue and consequently dispatch to the execution of parallel statements. Since a resource queue obtains its memory from its parent, the actual memory limit is derived from its parent queue. The valid values are 1% to 100%. The value of MEMORY\_LIMIT\_CLUSTER must be identical to the value of CORE\_LIMIT\_CLUSTER. The sum of the MEMORY\_LIMIT\_CLUSTER values of this queue and other queues that share the same parent cannot exceed 100%. The HAWQ resource manager periodically validates this restriction.</dd>
+
+<dt>CORE\_LIMIT\_CLUSTER=\<percentage\> </dt>
+<dd>Required. The percentage of consumable CPU (virtual core) resources that the resource queue can take from its parent resource queue. The valid values are 1% to 100%. The value of MEMORY\_LIMIT\_CLUSTER must be identical to the value of CORE\_LIMIT\_CLUSTER. The sum of the CORE\_LIMIT\_CLUSTER values of this queue and queues that share the same parent cannot exceed 100%.</dd>
+
+<dt>ACTIVE\_STATEMENTS=\<integer\> </dt>
+<dd>Optional. Defines the limit of the number of parallel active statements in one leaf queue. The maximum number of connections cannot exceed this limit. If this limit is reached, the HAWQ resource manager queues more query allocation requests. Note that a single session can have several concurrent statement executions that occupy multiple connection resources. The value for `ACTIVE_STATEMENTS` should be an integer greater than 0. The default value is 20.</dd>
+
+<dt>ALLOCATION\_POLICY=\<string\> </dt>
+<dd>Optional. Defines the resource allocation policy for parallel statement execution. The default value is `even`.
+
+**Note:** This release only supports an `even` allocation policy. Even if you do not specify this attribute, the resource queue still applies an `even` allocation policy. Future releases will support alternative allocation policies.
+
+Setting the allocation policy to `even` means resources are always evenly dispatched based on current concurrency. When multiple query resource allocation requests are queued, the resource queue tries to evenly dispatch resources to queued requests until one of the following conditions is encountered:
+
+-   There are no more allocated resources in this queue to dispatch, or
+-   The ACTIVE\_STATEMENTS limit has been reached
+
+For each query resource allocation request, the HAWQ resource manager determines the minimum and maximum size of a virtual segment based on multiple factors, including query cost, user configuration, and table properties. For example, a hash-distributed table requires a fixed virtual segment size. With an even allocation policy, the HAWQ resource manager uses the minimum virtual segment size requirement and evenly dispatches resources to each query resource allocation request in the resource queue.</dd>
+
+<dt>VSEG\_RESOURCE\_QUOTA='mem:{128mb | 256mb | 512mb | 1024mb | 2048mb | 4096mb | 8192mb | 16384mb | 1gb | 2gb | 4gb | 8gb | 16gb}' </dt>
+<dd>Optional. This quota defines how resources are split across multiple virtual segments. For example, when the HAWQ resource manager determines that 256GB memory and 128 vcores should be allocated to the current resource queue, there are multiple ways to divide the resources across virtual segments: a) 2GB/1 vcore \* 128 virtual segments, or b) 1GB/0.5 vcore \* 256 virtual segments. You can therefore use this attribute to make the HAWQ resource manager calculate the number of virtual segments based on how the memory is divided. For example, if `VSEG_RESOURCE_QUOTA='mem:512mb'`, then the resource queue will use 512MB/0.25 vcore \* 512 virtual segments. The default value is `'mem:256mb'`.
+
+**Note:** To avoid resource fragmentation, make sure that the segment resource capacity configured for HAWQ (in HAWQ Standalone mode: `hawq_rm_memory_limit_perseg`; in YARN mode: `yarn.nodemanager.resource.memory-mb`) is a multiple of the resource quotas for all virtual segments, and that the CPU-to-memory ratio is a multiple of the amount configured for `yarn.scheduler.minimum-allocation-mb`.</dd>
+
+<dt>RESOURCE\_OVERCOMMIT\_FACTOR=\<double\> </dt>
+<dd>Optional. This factor defines how much a resource can be overcommitted. For example, if RESOURCE\_OVERCOMMIT\_FACTOR is set to 3.0 and MEMORY\_LIMIT\_CLUSTER is set to 30%, then the maximum possible resource allocation in this queue is 90% (30% x 3.0). If the resulting maximum is bigger than 100%, then 100% is adopted. The minimum value that this attribute can be set to is `1.0`. The default value is `2.0`.</dd>
+
+<dt>NVSEG\_UPPER\_LIMIT=\<integer\> / NVSEG\_UPPER\_LIMIT\_PERSEG=\<double\>  </dt>
+<dd>Optional. These limits restrict the range of the number of virtual segments allocated in this resource queue for executing one query statement. NVSEG\_UPPER\_LIMIT defines an upper limit of virtual segments for one statement execution regardless of the actual cluster size, while NVSEG\_UPPER\_LIMIT\_PERSEG defines the same limit as an average number of virtual segments per physical segment. Therefore, the limit defined by NVSEG\_UPPER\_LIMIT\_PERSEG varies dynamically with the changing size of the HAWQ cluster.
+
+For example, if you set `NVSEG_UPPER_LIMIT=10`, all query resource requests are strictly allocated no more than 10 virtual segments. If you set `NVSEG_UPPER_LIMIT_PERSEG=2` and there are currently 5 available HAWQ segments in the cluster, query resource requests are allocated at most 10 virtual segments (2 \* 5).
+
+NVSEG\_UPPER\_LIMIT cannot be set to a lower value than NVSEG\_LOWER\_LIMIT if both limits are enabled. In addition, the upper limit cannot be set to a value larger than the values of the global configuration parameters `hawq_rm_nvseg_perquery_limit` and `hawq_rm_nvseg_perquery_perseg_limit`.
+
+By default, both limits are set to **-1**, which means the limits are disabled. `NVSEG_UPPER_LIMIT` has higher priority than `NVSEG_UPPER_LIMIT_PERSEG`. If both limits are set, then `NVSEG_UPPER_LIMIT_PERSEG` is ignored. If you have enabled resource quotas for the query statement, then these limits are ignored.
+
+**Note:** If the actual lower limit of the number of virtual segments becomes greater than the upper limit, then the lower limit is automatically reduced to be equal to the upper limit. This situation is possible when a user sets both `NVSEG_UPPER_LIMIT` and `NVSEG_LOWER_LIMIT_PERSEG`. After expanding the cluster, the dynamic lower limit may become greater than the value set for the fixed upper limit.</dd>
+
+<dt>NVSEG\_LOWER\_LIMIT=\<integer\> / NVSEG\_LOWER\_LIMIT\_PERSEG=\<double\>   </dt>
+<dd>Optional. These limits specify the minimum number of virtual segments allocated for one statement execution in order to guarantee query performance. NVSEG\_LOWER\_LIMIT defines the lower limit of virtual segments for one statement execution regardless of the actual cluster size, while NVSEG\_LOWER\_LIMIT\_PERSEG defines the same limit as an average number of virtual segments per physical segment. Therefore, the limit defined by NVSEG\_LOWER\_LIMIT\_PERSEG varies dynamically with the size of the HAWQ cluster.
+
+NVSEG\_UPPER\_LIMIT\_PERSEG cannot be less than NVSEG\_LOWER\_LIMIT\_PERSEG if both limits are enabled.
+
+For example, if you set `NVSEG_LOWER_LIMIT=10` and one statement execution potentially needs no fewer than 10 virtual segments, then the request is allocated at least 10 virtual segments. Likewise, if you set `NVSEG_LOWER_LIMIT_PERSEG=2` and there are currently 5 available HAWQ segments in the cluster, the effective lower limit is also 10 virtual segments. However, if one statement execution needs at most 4 virtual segments, the resource manager allocates at most 4 virtual segments instead of 10, since the request cannot use more.
+
+By default, both limits are set to **-1**, which means the limits are disabled. `NVSEG_LOWER_LIMIT` has higher priority than `NVSEG_LOWER_LIMIT_PERSEG`. If both limits are set, then `NVSEG_LOWER_LIMIT_PERSEG` is ignored. If you have enabled resource quotas for the query statement, then these limits are ignored.
+
+**Note:** If the actual lower limit of the number of virtual segments becomes greater than the upper limit, then the lower limit is automatically reduced to be equal to the upper limit. This situation is possible when user sets both `NVSEG_UPPER_LIMIT `and `NVSEG_LOWER_LIMIT_PERSEG`. After expanding the cluster, the dynamic lower limit may become greater than the value set for the fixed upper limit. </dd>
+
+## <a id="topic1__section5"></a>Notes
+
+To check on the configuration of a resource queue, you can query the `pg_resqueue` catalog table. To see the runtime status of all resource queues, you can use the `pg_resqueue_status` view. See [Checking Existing Resource Queues](../../resourcemgmt/ResourceQueues.html#topic_lqy_gls_zt).
+
+`CREATE RESOURCE QUEUE` cannot be run within a transaction.
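+
+For example, to review the attributes and the runtime status of all resource queues:
+
+``` pre
+SELECT * FROM pg_resqueue;
+SELECT * FROM pg_resqueue_status;
+```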
+
+## <a id="topic1__section6"></a>Examples
+
+Create a resource queue as a child of `pg_root` with an active query limit of 20 and memory and core limits of 50%:
+
+``` pre
+CREATE RESOURCE QUEUE myqueue WITH (PARENT='pg_root', ACTIVE_STATEMENTS=20,
+MEMORY_LIMIT_CLUSTER=50%, CORE_LIMIT_CLUSTER=50%);
+```
+
+Create a resource queue as a child of pg\_root with memory and CPU limits and a resource overcommit factor:
+
+``` pre
+CREATE RESOURCE QUEUE test_queue_1 WITH (PARENT='pg_root', 
+MEMORY_LIMIT_CLUSTER=50%, CORE_LIMIT_CLUSTER=50%, RESOURCE_OVERCOMMIT_FACTOR=2);
+```
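+
+Create a resource queue that also constrains virtual segments, combining the `VSEG_RESOURCE_QUOTA` and `NVSEG_UPPER_LIMIT_PERSEG` attributes described above (a sketch; the queue name and attribute values are illustrative):
+
+``` pre
+CREATE RESOURCE QUEUE test_queue_2 WITH (PARENT='pg_root',
+MEMORY_LIMIT_CLUSTER=50%, CORE_LIMIT_CLUSTER=50%,
+VSEG_RESOURCE_QUOTA='mem:512mb', NVSEG_UPPER_LIMIT_PERSEG=2);
+```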
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE RESOURCE QUEUE` is a HAWQ extension. There is no provision for resource queues or workload management in the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[ALTER RESOURCE QUEUE](ALTER-RESOURCE-QUEUE.html), [ALTER ROLE](ALTER-ROLE.html), [CREATE ROLE](CREATE-ROLE.html), [DROP RESOURCE QUEUE](DROP-RESOURCE-QUEUE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-ROLE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-ROLE.html.md.erb b/markdown/reference/sql/CREATE-ROLE.html.md.erb
new file mode 100644
index 0000000..ec7ac7c
--- /dev/null
+++ b/markdown/reference/sql/CREATE-ROLE.html.md.erb
@@ -0,0 +1,196 @@
+---
+title: CREATE ROLE
+---
+
+Defines a new database role (user or group).
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE ROLE <name> [[WITH] <option> [ ... ]]
+```
+
+where \<option\> can be:
+
+``` pre
+      SUPERUSER | NOSUPERUSER
+    | CREATEDB | NOCREATEDB
+    | CREATEROLE | NOCREATEROLE
+    | CREATEEXTTABLE | NOCREATEEXTTABLE
+      [ ( <attribute>='<value>'[, ...] ) ]
+           where attribute and value are:
+           type='readable'|'writable'
+           protocol='gpfdist'|'http'
+    | INHERIT | NOINHERIT
+    | LOGIN | NOLOGIN
+    | CONNECTION LIMIT <connlimit>
+    | [ ENCRYPTED | UNENCRYPTED ] PASSWORD '<password>'
+    | VALID UNTIL '<timestamp>'
+    | IN ROLE <rolename> [, ...]
+    | ROLE <rolename> [, ...]
+    | ADMIN <rolename> [, ...]
+    | RESOURCE QUEUE <queue_name>
+    | [ DENY <deny_point> ]
+    | [ DENY BETWEEN <deny_point> AND <deny_point>]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE ROLE` adds a new role to a HAWQ system. A role is an entity that can own database objects and have database privileges. A role can be considered a user, a group, or both depending on how it is used. You must have `CREATEROLE` privilege or be a database superuser to use this command.
+
+Note that roles are defined at the system-level and are valid for all databases in your HAWQ system.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<name\>  </dt>
+<dd>The name of the new role.</dd>
+
+<dt>SUPERUSER,  
+NOSUPERUSER  </dt>
+<dd>If `SUPERUSER` is specified, the role being defined will be a superuser, who can override all access restrictions within the database. Superuser status is dangerous and should be used only when really needed. You must yourself be a superuser to create a new superuser. `NOSUPERUSER` is the default.</dd>
+
+<dt>CREATEDB,  
+NOCREATEDB  </dt>
+<dd>If `CREATEDB` is specified, the role being defined will be allowed to create new databases. `NOCREATEDB` (the default) will deny a role the ability to create databases.</dd>
+
+<dt>CREATEROLE,  
+NOCREATEROLE  </dt>
+<dd>If `CREATEROLE` is specified, the role being defined will be allowed to create new roles, alter other roles, and drop other roles. `NOCREATEROLE` (the default) will deny a role the ability to create roles or modify roles other than their own.</dd>
+
+<dt>CREATEEXTTABLE,  
+NOCREATEEXTTABLE  </dt>
+<dd>If `CREATEEXTTABLE` is specified, the role being defined is allowed to create external tables. The default \<type\> is `readable` and the default `protocol` is `gpfdist` if not specified. `NOCREATEEXTTABLE` (the default) denies the role the ability to create external tables. Using the `file` protocol when creating external tables is not supported because HAWQ cannot guarantee scheduling executors on a specific host. Likewise, you cannot use the `EXECUTE` command with `ON ALL` and `ON HOST` for the same reason. Use `ON MASTER`, `ON <number>`, or `ON SEGMENT <segment_id>` to specify which segment instances are to execute the command.</dd>
+
+<dt>INHERIT,  
+NOINHERIT  </dt>
+<dd>If specified, `INHERIT` (the default) allows the role to use whatever database privileges have been granted to all roles it is directly or indirectly a member of. With `NOINHERIT`, membership in another role only grants the ability to `SET ROLE` to that other role.</dd>
+
+<dt>LOGIN,  
+NOLOGIN  </dt>
+<dd>If specified, `LOGIN` allows a role to log in to a database. A role having the `LOGIN` attribute can be thought of as a user. Roles with `NOLOGIN` (the default) are useful for managing database privileges, and can be thought of as groups.</dd>
+
+<dt>CONNECTION LIMIT \<connlimit\>  </dt>
+<dd>The maximum number of concurrent connections this role can make. The default of -1 means there is no limit.</dd>
+
+<dt>PASSWORD \<password\>  </dt>
+<dd>Sets the user password for roles with the `LOGIN` attribute. If you do not plan to use password authentication you can omit this option. If no \<password\> is specified, the password will be set to null and password authentication will always fail for that user. A null \<password\> can optionally be written explicitly as `PASSWORD NULL`.</dd>
+
+<dt>ENCRYPTED,  
+UNENCRYPTED  </dt>
+<dd>These key words control whether the password is stored encrypted in the system catalogs. (If neither is specified, the default behavior is determined by the configuration parameter `password_encryption`.) If the presented password string is already in MD5-encrypted format, then it is stored encrypted as-is, regardless of whether `ENCRYPTED` or `UNENCRYPTED` is specified (since the system cannot decrypt the specified encrypted password string). This allows reloading of encrypted passwords during dump/restore.
+
+Note that older clients may lack support for the MD5 authentication mechanism that is needed to work with passwords that are stored encrypted.</dd>
+
+<dt>VALID UNTIL '\<timestamp\>'  </dt>
+<dd>The `VALID UNTIL` clause sets a date and time after which the role's password is no longer valid. If this clause is omitted, the password will never expire.</dd>
+
+<dt>IN ROLE \<rolename\>  </dt>
+<dd>Adds the new role as a member of the named roles. Note that there is no option to add the new role as an administrator; use a separate `GRANT` command to do that.</dd>
+
+<dt>ROLE \<rolename\>  </dt>
+<dd>Adds the named roles as members of this role, making this new role a group.</dd>
+
+<dt>ADMIN \<rolename\>  </dt>
+<dd>The `ADMIN` clause is like `ROLE`, but the named roles are added to the new role `WITH ADMIN OPTION`, giving them the right to grant membership in this role to others.</dd>
+
+<dt>RESOURCE QUEUE \<queue\_name\>  </dt>
+<dd>The name of the resource queue to which the new user-level role is to be assigned. Only roles with `LOGIN` privilege can be assigned to a resource queue. The special keyword `NONE` means that the role is assigned to the default resource queue. A role can only belong to one resource queue.</dd>
+
+<dt>DENY \<deny\_point\>,  
+DENY BETWEEN \<deny\_point\> AND \<deny\_point\>   </dt>
+<dd>The `DENY` and `DENY BETWEEN` keywords set time-based constraints that are enforced at login. `DENY` sets a day or a day and time to deny access. `DENY BETWEEN` sets an interval during which access is denied. Both use the parameter \<deny\_point\> that has the following format:
+
+``` pre
+DAY <day> [ TIME '<time>' ]
+```
+
+The two parts of the \<deny_point\> parameter use the following formats:
+
+For \<day\>:
+
+``` pre
+{'Sunday' | 'Monday' | 'Tuesday' |'Wednesday' | 'Thursday' | 'Friday' |
+'Saturday' | 0-6 }
+```
+
+For \<time\>:
+
+``` pre
+{ 00-23 : 00-59 | 01-12 : 00-59 { AM | PM }}
+```
+
+The `DENY BETWEEN` clause uses two \<deny\_point\> parameters:
+
+``` pre
+DENY BETWEEN <deny_point> AND <deny_point>
+```
+</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+The preferred way to add and remove role members (manage groups) is to use [GRANT](GRANT.html) and [REVOKE](REVOKE.html).
+
+The `VALID UNTIL` clause defines an expiration time for a password only, not for the role. The expiration time is not enforced when logging in using a non-password-based authentication method.
+
+The `INHERIT` attribute governs inheritance of grantable privileges (access privileges for database objects and role memberships). It does not apply to the special role attributes set by `CREATE ROLE` and `ALTER ROLE`. For example, being a member of a role with `CREATEDB` privilege does not immediately grant the ability to create databases, even if `INHERIT` is set.
+
+The `INHERIT` attribute is the default for reasons of backwards compatibility. In prior releases of HAWQ, users always had access to all privileges of groups they were members of. However, `NOINHERIT` provides a closer match to the semantics specified in the SQL standard.
+
+Be careful with the `CREATEROLE` privilege. There is no concept of inheritance for the privileges of a `CREATEROLE` role. That means that even if a role does not have a certain privilege but is allowed to create other roles, it can easily create another role with different privileges than its own (except for creating roles with superuser privileges). For example, if a role has the `CREATEROLE` privilege but not the `CREATEDB` privilege, it can create a new role with the `CREATEDB` privilege. Therefore, regard roles that have the `CREATEROLE` privilege as almost-superuser roles.
+
+The `CONNECTION LIMIT` option is never enforced for superusers.
+
+Exercise caution when specifying an unencrypted password with this command. The password will be transmitted to the server in clear text, and it might also be logged in the client's command history or the server log. The client program `createuser`, however, transmits the password encrypted. Also, `psql` contains a command `\password` that can be used to safely change the password later.
+
+## <a id="topic1__section6"></a>Examples
+
+Create a role that can log in, but don't give it a password:
+
+``` pre
+CREATE ROLE jonathan LOGIN;
+```
+
+Create a role that belongs to a resource queue:
+
+``` pre
+CREATE ROLE jonathan LOGIN RESOURCE QUEUE poweruser;
+```
+
+Create a role with a password that is valid until the end of 2009 (`CREATE USER` is the same as `CREATE ROLE` except that it implies `LOGIN`):
+
+``` pre
+CREATE USER joelle WITH PASSWORD 'jw8s0F4' VALID UNTIL '2010-01-01';
+```
+
+Create a role that can create databases and manage other roles:
+
+``` pre
+CREATE ROLE admin WITH CREATEDB CREATEROLE;
+```
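+
+Create a role that can create writable `gpfdist` external tables, a sketch of the `CREATEEXTTABLE` attribute syntax shown in the synopsis (the role name is illustrative):
+
+``` pre
+CREATE ROLE etlwriter LOGIN
+CREATEEXTTABLE (type='writable', protocol='gpfdist');
+```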
+
+Create a role that does not allow login access on Sundays:
+
+``` pre
+CREATE ROLE user3 DENY DAY 'Sunday';
+```
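+
+Create a role that is denied login access for the whole weekend, a sketch of the `DENY BETWEEN` form (the role name is illustrative):
+
+``` pre
+CREATE ROLE user4 DENY BETWEEN DAY 'Saturday' TIME '00:00'
+AND DAY 'Monday' TIME '00:00';
+```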
+
+## <a id="topic1__section7"></a>Compatibility
+
+The SQL standard defines the concepts of users and roles, but it regards them as distinct concepts and leaves all commands defining users to be specified by the database implementation. In HAWQ, users and roles are unified into a single type of object. Roles therefore have many more optional attributes than they do in the standard.
+
+`CREATE ROLE` is in the SQL standard, but the standard only requires the syntax:
+
+``` pre
+CREATE ROLE <name> [WITH ADMIN <rolename>]
+```
+
+Allowing multiple initial administrators, and all the other options of `CREATE ROLE`, are HAWQ extensions.
+
+The behavior specified by the SQL standard is most closely approximated by giving users the `NOINHERIT` attribute, while roles are given the `INHERIT` attribute.
+
+## <a id="topic1__section8"></a>See Also
+
+[SET ROLE](SET-ROLE.html), [ALTER ROLE](ALTER-ROLE.html), [DROP ROLE](DROP-ROLE.html), [GRANT](GRANT.html), [REVOKE](REVOKE.html), [CREATE RESOURCE QUEUE](CREATE-RESOURCE-QUEUE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-SCHEMA.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-SCHEMA.html.md.erb b/markdown/reference/sql/CREATE-SCHEMA.html.md.erb
new file mode 100644
index 0000000..f24e0cc
--- /dev/null
+++ b/markdown/reference/sql/CREATE-SCHEMA.html.md.erb
@@ -0,0 +1,63 @@
+---
+title: CREATE SCHEMA
+---
+
+Defines a new schema.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE SCHEMA <schema_name> [AUTHORIZATION <username>] 
+   [<schema_element> [ ... ]]
+
+CREATE SCHEMA AUTHORIZATION <rolename> [<schema_element> [ ... ]]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE SCHEMA` enters a new schema into the current database. The schema name must be distinct from the name of any existing schema in the current database.
+
+A schema is essentially a namespace: it contains named objects (tables, data types, functions, and operators) whose names may duplicate those of other objects existing in other schemas. Named objects are accessed either by qualifying their names with the schema name as a prefix, or by setting a search path that includes the desired schema(s). A `CREATE` command specifying an unqualified object name creates the object in the current schema (the one at the front of the search path, which can be determined with the function `current_schema`).
+
+Optionally, `CREATE SCHEMA` can include subcommands to create objects within the new schema. The subcommands are treated essentially the same as separate commands issued after creating the schema, except that if the `AUTHORIZATION` clause is used, all the created objects will be owned by that role.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<schema\_name\>   </dt>
+<dd>The name of a schema to be created. If this is omitted, the user name is used as the schema name. The name cannot begin with `pg_`, as such names are reserved for system catalog schemas.</dd>
+
+<dt> \<rolename\>   </dt>
+<dd>The name of the role who will own the schema. If omitted, defaults to the role executing the command. Only superusers may create schemas owned by roles other than themselves.</dd>
+
+<dt> \<schema\_element\>   </dt>
+<dd>An SQL statement defining an object to be created within the schema. Currently, only `CREATE TABLE`, `CREATE VIEW`, `CREATE INDEX`, `CREATE SEQUENCE`, and `GRANT` are accepted as clauses within `CREATE SCHEMA`. Other kinds of objects may be created in separate commands after the schema is created.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+To create a schema, the invoking user must have the `CREATE` privilege for the current database or be a superuser.
+
+## <a id="topic1__section6"></a>Examples
+
+Create a schema:
+
+``` pre
+CREATE SCHEMA myschema;
+```
+
+Create a schema for role `joe` (the schema will also be named `joe`):
+
+``` pre
+CREATE SCHEMA AUTHORIZATION joe;
+```
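+
+Create a schema and some objects inside it in a single statement, a sketch of the \<schema\_element\> subcommands (the object names are illustrative):
+
+``` pre
+CREATE SCHEMA hollywood
+    CREATE TABLE films (title text, release date, awards text[])
+    CREATE VIEW winners AS
+        SELECT title, release FROM films WHERE awards IS NOT NULL;
+```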
+
+## <a id="topic1__section7"></a>Compatibility
+
+The SQL standard allows a `DEFAULT CHARACTER SET` clause in `CREATE SCHEMA`, as well as more subcommand types than are presently accepted by HAWQ.
+
+The SQL standard specifies that the subcommands in `CREATE SCHEMA` may appear in any order. The present HAWQ implementation does not handle all cases of forward references in subcommands; it may sometimes be necessary to reorder the subcommands in order to avoid forward references.
+
+According to the SQL standard, the owner of a schema always owns all objects within it. HAWQ allows schemas to contain objects owned by users other than the schema owner. This can happen only if the schema owner grants the `CREATE` privilege on the schema to someone else.
+
+## <a id="topic1__section8"></a>See Also
+
+[DROP SCHEMA](DROP-SCHEMA.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-SEQUENCE.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-SEQUENCE.html.md.erb b/markdown/reference/sql/CREATE-SEQUENCE.html.md.erb
new file mode 100644
index 0000000..b2557c6
--- /dev/null
+++ b/markdown/reference/sql/CREATE-SEQUENCE.html.md.erb
@@ -0,0 +1,135 @@
+---
+title: CREATE SEQUENCE
+---
+
+Defines a new sequence generator.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [TEMPORARY | TEMP] SEQUENCE <name>
+       [INCREMENT [BY] <value>]
+       [MINVALUE <minvalue> | NO MINVALUE]
+       [MAXVALUE <maxvalue> | NO MAXVALUE]
+       [START [ WITH ] <start>]
+       [CACHE <cache>]
+       [[NO] CYCLE]
+       [OWNED BY { <table>.<column> | NONE }]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE SEQUENCE` creates a new sequence number generator. This involves creating and initializing a new special single-row table. The generator will be owned by the user issuing the command.
+
+If a schema name is given, then the sequence is created in the specified schema. Otherwise it is created in the current schema. Temporary sequences exist in a special schema, so a schema name may not be given when creating a temporary sequence. The sequence name must be distinct from the name of any other sequence, table, or view in the same schema.
+
+After a sequence is created, you use the `nextval` function to operate on the sequence. For example, to insert a row into a table that gets the next value of a sequence:
+
+``` pre
+INSERT INTO distributors VALUES (nextval('myserial'), 'acme');
+```
+
+You can also use the function `setval` to operate on a sequence, but only for queries that do not operate on distributed data. For example, the following query is allowed because it resets the sequence counter value for the sequence generator process on the master:
+
+``` pre
+SELECT setval('myserial', 201);
+```
+
+But the following query will be rejected in HAWQ because it operates on distributed data:
+
+``` pre
+INSERT INTO product VALUES (setval('myserial', 201), 'gizmo');
+```
+
+In a regular (non-distributed) database, functions that operate on the sequence go to the local sequence table to get values as they are needed. In HAWQ, however, keep in mind that each segment is its own distinct database process. Therefore, the segments need a single point of truth for sequence values, so that all segments are incremented correctly and the sequence moves forward in the right order. A sequence server process runs on the master and is the point of truth for a sequence in a HAWQ distributed database. Segments get sequence values at runtime from the master.
+
+Because of this distributed sequence design, there are some limitations on the functions that operate on a sequence in HAWQ:
+
+-   `lastval` and `currval` functions are not supported.
+-   `setval` can only be used to set the value of the sequence generator on the master; it cannot be used in subqueries to update records in distributed table data.
+-   `nextval` sometimes grabs a block of values from the master for a segment to use, depending on the query. So values may sometimes be skipped in the sequence if all of the block turns out not to be needed at the segment level. Note that a regular PostgreSQL database does this too, so this is not something unique to HAWQ.
+
+Although you cannot update a sequence directly, you can use a query like:
+
+``` pre
+SELECT * FROM <sequence_name>;
+```
+
+to examine the parameters and current state of a sequence. In particular, the `last_value` field of the sequence shows the last value allocated by any session.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>TEMPORARY | TEMP  </dt>
+<dd>If specified, the sequence object is created only for this session, and is automatically dropped on session exit. Existing permanent sequences with the same name are not visible (in this session) while the temporary sequence exists, unless they are referenced with schema-qualified names.</dd>
+
+<dt> \<name\>  </dt>
+<dd>The name (optionally schema-qualified) of the sequence to be created.</dd>
+
+<dt> \<increment\>  </dt>
+<dd>Specifies which value is added to the current sequence value to create a new value. A positive value will make an ascending sequence, a negative one a descending sequence. The default value is 1.</dd>
+
+<dt> \<minvalue\>  
+NO MINVALUE  </dt>
+<dd>Determines the minimum value a sequence can generate. If this clause is not supplied or `NO MINVALUE` is specified, then defaults will be used. The defaults are 1 and -2<sup>63</sup>-1 for ascending and descending sequences, respectively.</dd>
+
+<dt> \<maxvalue\>  
+NO MAXVALUE  </dt>
+<dd>Determines the maximum value for the sequence. If this clause is not supplied or `NO MAXVALUE` is specified, then default values will be used. The defaults are 2<sup>63</sup>-1 and -1 for ascending and descending sequences, respectively.</dd>
+
+<dt> \<start\>  </dt>
+<dd>Allows the sequence to begin anywhere. The default starting value is \<minvalue\> for ascending sequences and \<maxvalue\> for descending ones.</dd>
+
+<dt> \<cache\>  </dt>
+<dd>Specifies how many sequence numbers are to be preallocated and stored in memory for faster access. The minimum (and default) value is 1 (no cache).</dd>
+
+<dt>CYCLE  
+NO CYCLE  </dt>
+<dd>Allows the sequence to wrap around when the \<maxvalue\> (for ascending) or \<minvalue\> (for descending) has been reached. If the limit is reached, the next number generated will be the \<minvalue\> (for ascending) or \<maxvalue\> (for descending). If `NO CYCLE` is specified, any calls to `nextval` after the sequence has reached its maximum value will return an error. If not specified, `NO CYCLE` is the default.</dd>
+
+<dt>OWNED BY \<table\>.\<column\>  
+OWNED BY NONE  </dt>
+<dd>Causes the sequence to be associated with a specific table column, such that if that column (or its whole table) is dropped, the sequence will be automatically dropped as well. The specified table must have the same owner and be in the same schema as the sequence. `OWNED BY NONE`, the default, specifies that there is no such association.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Sequences are based on bigint arithmetic, so the range cannot exceed the range of an eight-byte integer (-9223372036854775808 to 9223372036854775807).
+
+Although multiple sessions are guaranteed to allocate distinct sequence values, the values may be generated out of sequence when all the sessions are considered. For example, session A might reserve values 1..10 and return `nextval=1`, then session B might reserve values 11..20 and return `nextval=11` before session A has generated `nextval=2`. Thus, you should only assume that the `nextval` values are all distinct, not that they are generated purely sequentially. Also, `last_value` will reflect the latest value reserved by any session, whether or not it has yet been returned by `nextval`.
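+
+As a minimal sketch of this behavior (the sequence name is illustrative), a sequence created with a cache reserves a block of values for each session, so values returned across sessions can interleave or leave gaps:
+
+``` pre
+CREATE SEQUENCE cached_seq CACHE 10;
+SELECT nextval('cached_seq'); -- one session may reserve 1..10 and return 1
+```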
+
+## <a id="topic1__section6"></a>Examples
+
+Create a sequence named `myseq`:
+
+``` pre
+CREATE SEQUENCE myseq START 101;
+```
+
+Insert a row into a table that gets the next value:
+
+``` pre
+INSERT INTO distributors VALUES (nextval('myseq'), 'acme');
+```
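+
+Create a sequence that wraps around when its maximum is reached (the sequence name is illustrative); after generating 3, the next value is 1 again:
+
+``` pre
+CREATE SEQUENCE cycleseq MAXVALUE 3 CYCLE;
+```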
+
+Reset the sequence counter value on the master:
+
+``` pre
+SELECT setval('myseq', 201);
+```
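+
+Create a sequence associated with a table column, so that dropping the column (or its table) also drops the sequence. This assumes a table `orders` with a column `id` exists in the same schema, with the same owner as the sequence:
+
+``` pre
+CREATE SEQUENCE order_id_seq OWNED BY orders.id;
+```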
+
+Illegal use of `setval` in HAWQ (setting sequence values on distributed data):
+
+``` pre
+INSERT INTO product VALUES (setval('myseq', 201), 'gizmo');
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE SEQUENCE` conforms to the SQL standard, with the following exceptions:
+
+-   The `AS <data_type>` expression specified in the SQL standard is not supported.
+-   Obtaining the next value is done using the `nextval()` function instead of the `NEXT VALUE FOR` expression specified in the SQL standard.
+-   The `OWNED BY` clause is a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[DROP SEQUENCE](DROP-SEQUENCE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/de1e2e07/markdown/reference/sql/CREATE-TABLE-AS.html.md.erb
----------------------------------------------------------------------
diff --git a/markdown/reference/sql/CREATE-TABLE-AS.html.md.erb b/markdown/reference/sql/CREATE-TABLE-AS.html.md.erb
new file mode 100644
index 0000000..1979af4
--- /dev/null
+++ b/markdown/reference/sql/CREATE-TABLE-AS.html.md.erb
@@ -0,0 +1,126 @@
+---
+title: CREATE TABLE AS
+---
+
+Defines a new table from the results of a query.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [ [GLOBAL | LOCAL] {TEMPORARY | TEMP} ] TABLE <table_name>
+   [(<column_name> [, ...] )]
+   [ WITH ( <storage_parameter>=<value> [, ... ] ) ]
+   [ON COMMIT {PRESERVE ROWS | DELETE ROWS | DROP}]
+   [TABLESPACE <tablespace>]
+   AS <query>
+   [DISTRIBUTED BY (<column> [, ... ] ) | DISTRIBUTED RANDOMLY]
+```
+
+where \<storage\_parameter\> is:
+
+``` pre
+   APPENDONLY={TRUE}
+   BLOCKSIZE={8192-2097152}
+   bucketnum={<x>}
+   ORIENTATION={ROW | PARQUET}
+   COMPRESSTYPE={ZLIB | SNAPPY | GZIP | NONE}
+   COMPRESSLEVEL={0-9 | 1}
+   FILLFACTOR={10-100}
+   OIDS=[TRUE | FALSE]
+   PAGESIZE={1024-1073741823}
+   ROWGROUPSIZE={1024-1073741823}
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE TABLE AS` creates a table and fills it with data computed by a [SELECT](SELECT.html) command. The table columns have the names and data types associated with the output columns of the `SELECT`; however, you can override the column names by giving an explicit list of new column names.
+
+`CREATE TABLE AS` creates a new table and evaluates the query just once to fill the new table initially. The new table will not track subsequent changes to the source tables of the query.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>GLOBAL | LOCAL  </dt>
+<dd>These keywords are present for SQL standard compatibility, but have no effect in HAWQ.</dd>
+
+<dt>TEMPORARY | TEMP  </dt>
+<dd>If specified, the new table is created as a temporary table. Temporary tables are automatically dropped at the end of a session, or optionally at the end of the current transaction (see `ON COMMIT`). Existing permanent tables with the same name are not visible to the current session while the temporary table exists, unless they are referenced with schema-qualified names. Any indexes created on a temporary table are automatically temporary as well.</dd>
+
+<dt> \<table\_name\>   </dt>
+<dd>The name (optionally schema-qualified) of the new table to be created.</dd>
+
+<dt> \<column\_name\>   </dt>
+<dd>The name of a column in the new table. If column names are not provided, they are taken from the output column names of the query. If the table is created from an `EXECUTE` command, a column name list cannot be specified.</dd>
+
+<dt>WITH (\<storage\_parameter\>=\<value\> )  </dt>
+<dd>The `WITH` clause can be used to set storage options for the table or its indexes. Note that you can also set different storage parameters on a particular partition or subpartition by declaring the `WITH` clause in the partition specification. The following storage options are available:
+
+**APPENDONLY** — Set to `TRUE` to create the table as an append-only table. If `FALSE`, an error message displays stating that heap tables are not supported.
+
+**BLOCKSIZE** — Set to the size, in bytes, of each block in a table. The `BLOCKSIZE` must be between 8192 and 2097152 bytes, and be a multiple of 8192. The default is 32768.
+
+**bucketnum** — Set to the number of hash buckets to use for a hash-distributed table. To use a non-default number of hash buckets, specify `bucketnum` in the `WITH` clause when creating the table; if distribution is specified by column and `bucketnum` is not set, the table inherits the default bucket number. (See the example following this parameter list.)
+
+**ORIENTATION** — Set to `row` (the default) for row-oriented storage, or `parquet`. This option is valid only if `APPENDONLY=TRUE`. Heap-storage tables can only be row-oriented.
+
+**COMPRESSTYPE** — Set to `ZLIB`, `SNAPPY`, or `GZIP` to specify the type of compression used. `ZLIB` provides more compact compression ratios at lower speeds. Parquet tables support `SNAPPY` and `GZIP` compression. Append-only tables support `SNAPPY` and `ZLIB` compression. This option is valid only if `APPENDONLY=TRUE`.
+
+**COMPRESSLEVEL** — Set to an integer value from 1 (fastest compression) to 9 (highest compression ratio). If not declared, the default is 1. This option is valid only if `APPENDONLY=TRUE` and `COMPRESSTYPE=[ZLIB|GZIP]`.
+
+**OIDS** — Set to `OIDS=FALSE` (the default) so that rows do not have object identifiers assigned to them. Do not enable OIDs when creating a table. On large tables, such as those in a typical HAWQ system, using OIDs for table rows can cause wrap-around of the 32-bit OID counter. Once the counter wraps around, OIDs can no longer be assumed to be unique, which not only makes them useless to user applications, but can also cause problems in the HAWQ system catalog tables. In addition, excluding OIDs from a table reduces the space required to store the table on disk by 4 bytes per row, slightly improving performance.</dd>
+
+<dt>ON COMMIT  </dt>
+<dd>The behavior of temporary tables at the end of a transaction block can be controlled using `ON COMMIT`. The three options are:
+
+**PRESERVE ROWS** — No special action is taken at the ends of transactions for temporary tables. This is the default behavior.
+
+**DELETE ROWS** — All rows in the temporary table will be deleted at the end of each transaction block. Essentially, an automatic `TRUNCATE` is done at each commit.
+
+**DROP** — The temporary table will be dropped at the end of the current transaction block.</dd>
+
+<dt>TABLESPACE \<tablespace\>   </dt>
+<dd>The tablespace is the name of the tablespace in which the new table is to be created. If not specified, the database's default tablespace is used.</dd>
+
+<dt>AS \<query\>   </dt>
+<dd>A [SELECT](SELECT.html) command, or an [EXECUTE](EXECUTE.html) command that runs a prepared `SELECT` query.</dd>
+
+<dt>DISTRIBUTED BY (\<column\>, \[ ... \] )  
+DISTRIBUTED RANDOMLY  </dt>
+<dd>Used to declare the HAWQ distribution policy for the table. The default is `RANDOM` distribution. `DISTRIBUTED BY` uses hash distribution, with one or more columns declared as the distribution key. Hash distribution can also be requested by specifying the `bucketnum` storage parameter, in which case the first eligible column of the table is used as the distribution key.</dd>
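+
+For example (the table and column names are illustrative), the following creates a hash-distributed table with an explicit number of hash buckets:
+
+``` pre
+CREATE TABLE sales_by_id WITH (APPENDONLY=TRUE, bucketnum=16)
+AS SELECT * FROM sales
+DISTRIBUTED BY (id);
+```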
+
+## <a id="topic1__section5"></a>Notes
+
+This command is functionally similar to [SELECT INTO](SELECT-INTO.html), but it is preferred since it is less likely to be confused with other uses of the `SELECT INTO` syntax. Furthermore, `CREATE TABLE AS` offers a superset of the functionality offered by `SELECT INTO`.
+
+`CREATE TABLE AS` can be used for fast data loading from external table data sources. See [CREATE EXTERNAL TABLE](CREATE-EXTERNAL-TABLE.html).
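+
+For example, assuming an external table `ext_expenses` has already been defined, a statement such as the following loads its data into a new HAWQ table:
+
+``` pre
+CREATE TABLE expenses AS SELECT * FROM ext_expenses;
+```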
+
+## <a id="topic1__section6"></a>Examples
+
+Create a new table `films_recent` consisting of only recent entries from the table `films`:
+
+``` pre
+CREATE TABLE films_recent AS SELECT * FROM films WHERE 
+date_prod >= '2007-01-01';
+```
+
+Create a new temporary table `films_recent`, consisting of only recent entries from the table films, using a prepared statement. The new table has OIDs and will be dropped at commit:
+
+``` pre
+PREPARE recentfilms(date) AS SELECT * FROM films WHERE 
+date_prod > $1;
+CREATE TEMP TABLE films_recent WITH (OIDS) ON COMMIT DROP AS 
+EXECUTE recentfilms('2007-01-01');
+```
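+
+Create a compressed parquet table that summarizes another table, distributed by the grouping column (the table and column names are illustrative):
+
+``` pre
+CREATE TABLE sales_summary WITH (APPENDONLY=TRUE,
+ORIENTATION=PARQUET, COMPRESSTYPE=SNAPPY) AS
+SELECT region, sum(amount) AS total_sales FROM sales GROUP BY region
+DISTRIBUTED BY (region);
+```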
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE TABLE AS` conforms to the SQL standard, with the following exceptions:
+
+-   The standard requires parentheses around the subquery clause; in HAWQ, these parentheses are optional.
+-   The standard defines a `WITH [NO] DATA` clause; this is not currently implemented by HAWQ. The behavior provided by HAWQ is equivalent to the standard's `WITH DATA` case. `WITH NO DATA` can be simulated by appending `LIMIT 0` to the query.
+-   HAWQ handles temporary tables differently from the standard; see `CREATE TABLE` for details.
+-   The `WITH` clause is a HAWQ extension; neither storage parameters nor `OIDs` are in the standard.
+-   The HAWQ concept of tablespaces is not part of the standard. The `TABLESPACE` clause is an extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE EXTERNAL TABLE](CREATE-EXTERNAL-TABLE.html), [EXECUTE](EXECUTE.html), [SELECT](SELECT.html), [SELECT INTO](SELECT-INTO.html)