Posted to commits@hawq.apache.org by yo...@apache.org on 2016/08/29 16:46:39 UTC

[04/36] incubator-hawq-docs git commit: moving book configuration to new 'book' branch, for HAWQ-1027

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CREATE-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-TABLE.html.md.erb b/reference/sql/CREATE-TABLE.html.md.erb
new file mode 100644
index 0000000..99ff35e
--- /dev/null
+++ b/reference/sql/CREATE-TABLE.html.md.erb
@@ -0,0 +1,455 @@
+---
+title: CREATE TABLE
+---
+
+Defines a new table.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [[GLOBAL | LOCAL] {TEMPORARY | TEMP}] TABLE <table_name> (
+[ { <column_name> <data_type> [ DEFAULT <default_expr> ]
+     [<column_constraint> [ ... ]
+     [ ENCODING ( <storage_directive> [,...] ) ]
+     ]
+   | <table_constraint>
+   | LIKE <other_table> [{INCLUDING | EXCLUDING}
+                         {DEFAULTS | CONSTRAINTS}] ...} ]
+   [, ... ] ]
+   [<column_reference_storage_directive> [, ...] ]
+   )
+   [ INHERITS ( <parent_table> [, ... ] ) ]
+   [ WITH ( <storage_parameter>=<value> [, ... ] ) ]
+   [ ON COMMIT {PRESERVE ROWS | DELETE ROWS | DROP} ]
+   [ TABLESPACE <tablespace> ]
+   [ DISTRIBUTED BY (<column> [, ... ] ) | DISTRIBUTED RANDOMLY ]
+   [ PARTITION BY <partition_type> (<column>)
+        [ SUBPARTITION BY <partition_type> (<column>) ]
+           [ SUBPARTITION TEMPLATE ( <template_spec> ) ]
+        [...]
+     ( <partition_spec> )
+        | [ SUBPARTITION BY <partition_type> (<column>) ]
+           [...]
+     ( <partition_spec>
+        [ ( <subpartition_spec>
+             [(...)]
+           ) ]
+     ) ]
+```
+
+where \<column\_constraint\> is:
+
+``` pre
+   [CONSTRAINT <constraint_name>]
+   NOT NULL | NULL
+   | CHECK ( <expression> )
+```
+
+where \<storage\_directive\> for a column is:
+
+``` pre
+   COMPRESSTYPE={ZLIB | SNAPPY | GZIP | NONE}
+ | COMPRESSLEVEL={0-9}
+ | BLOCKSIZE={8192-2097152}
+```
+
+where \<storage\_parameter\> for a table is:
+
+``` pre
+   APPENDONLY={TRUE}
+   BLOCKSIZE={8192-2097152}
+   bucketnum={<x>}
+   ORIENTATION={ROW | PARQUET}
+   COMPRESSTYPE={ZLIB | SNAPPY | GZIP | NONE}
+   COMPRESSLEVEL={0-9}
+   FILLFACTOR={10-100}
+   OIDS=[TRUE|FALSE]
+   PAGESIZE={1024-1073741823}
+   ROWGROUPSIZE={1024-1073741823}
+```
+
+and \<table\_constraint\> is:
+
+``` pre
+   [CONSTRAINT <constraint_name>]
+   CHECK ( <expression> )
+```
+
+where \<partition\_type\> is:
+
+``` pre
+   LIST | RANGE
+```
+
+where \<partition\_specification\> is:
+
+``` pre
+            <partition_element> [, ...]
+```
+
+and \<partition\_element\> is:
+
+``` pre
+   DEFAULT PARTITION <name>
+  | [PARTITION <name>] VALUES (<list_value> [,...] )
+  | [PARTITION <name>]
+     START ([<datatype>] '<start_value>') [INCLUSIVE | EXCLUSIVE]
+     [ END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE] ]
+     [ EVERY ([<datatype>] [<number> | INTERVAL] '<interval_value>') ]
+  | [PARTITION <name>]
+     END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE]
+     [ EVERY ([<datatype>] [<number> | INTERVAL] '<interval_value>') ]
+[ WITH ( <partition_storage_parameter>=<value> [, ... ] ) ]
+[<column_reference_storage_directive> [, ...] ]
+[ TABLESPACE <tablespace> ]
+```
+
+where \<subpartition\_spec\> or \<template\_spec\> is:
+
+``` pre
+            <subpartition_element> [, ...]
+```
+
+and \<subpartition\_element\> is:
+
+``` pre
+   DEFAULT SUBPARTITION <name>
+  | [SUBPARTITION <name>] VALUES (<list_value> [,...] )
+  | [SUBPARTITION <name>]
+     START ([<datatype>] '<start_value>') [INCLUSIVE | EXCLUSIVE]
+     [ END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE] ]
+     [ EVERY ([<datatype>] [<number> | INTERVAL] '<interval_value>') ]
+  | [SUBPARTITION <name>]
+     END ([<datatype>] '<end_value>') [INCLUSIVE | EXCLUSIVE]
+     [ EVERY ([<datatype>] [<number> | INTERVAL] '<interval_value>') ]
+[ WITH ( <partition_storage_parameter>=<value> [, ... ] ) ]
+[<column_reference_storage_directive> [, ...] ]
+[ TABLESPACE <tablespace> ]
+```
+
+where \<storage\_directive\> is:
+
+``` pre
+   COMPRESSTYPE={ZLIB | SNAPPY | GZIP | NONE}
+ | COMPRESSLEVEL={0-9}
+ | BLOCKSIZE={8192-2097152}
+```
+
+where \<column\_reference\_storage\_directive\> is:
+
+``` pre
+   COLUMN <column_name> ENCODING (<storage_directive> [, ... ] ), ...
+ |
+   DEFAULT COLUMN ENCODING (<storage_directive> [, ... ] )
+```
+
+where \<storage\_parameter\> for a partition is:
+
+``` pre
+   APPENDONLY={TRUE}
+   BLOCKSIZE={8192-2097152}
+   ORIENTATION={ROW | PARQUET}
+   COMPRESSTYPE={ZLIB | SNAPPY | GZIP | NONE}
+   COMPRESSLEVEL={0-9}
+   FILLFACTOR={10-100}
+   OIDS=[TRUE|FALSE]
+   PAGESIZE={1024-1073741823}
+   ROWGROUPSIZE={1024-1073741823}
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE TABLE` creates a new, initially empty table in the current database. The table is owned by the user issuing the command. If a schema name is given then the table is created in the specified schema. Otherwise it is created in the current schema. Temporary tables exist in a special schema, so a schema name may not be given when creating a temporary table. The name of the table must be distinct from the name of any other table, external table, sequence, or view in the same schema.
+
+The optional constraint clauses specify conditions that new or updated rows must satisfy for an insert or update operation to succeed. A constraint is an SQL object that helps define the set of valid values in the table in various ways. Constraints apply to tables, not to partitions. You cannot add a constraint to a partition or subpartition.
+
+There are two ways to define constraints: table constraints and column constraints. A column constraint is defined as part of a column definition. A table constraint definition is not tied to a particular column, and it can encompass more than one column. Every column constraint can also be written as a table constraint; a column constraint is only a notational convenience for use when the constraint only affects one column.
+
+When creating a table, there is an additional clause to declare the HAWQ distribution policy. If a `DISTRIBUTED BY` clause is not supplied, HAWQ assigns a `RANDOM` distribution policy to the table, where the rows are distributed based on a round-robin or random distribution. You can also choose to distribute data with a hash-based policy, where the `bucketnum` attribute sets the number of hash buckets used by a hash-distributed table. Columns of geometric or user-defined data types are not eligible as HAWQ distribution key columns. The number of buckets affects how many virtual segments will be used in processing.
+
+By default, a hash-distributed table is created with the number of hash buckets specified by the server configuration parameter `default_hash_table_bucket_number`. This can be changed at the session level, or in the `CREATE TABLE` DDL with the `bucketnum` storage parameter.
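+
+As a sketch (the table name and bucket values here are illustrative, not from the source), the bucket number can be set at the session level or per table:
+
+``` pre
+-- Session level: applies to hash tables created later in this session
+SET default_hash_table_bucket_number = 16;
+
+-- Table level: the bucketnum storage parameter overrides the session setting
+CREATE TABLE t1 (id int, val text)
+WITH (bucketnum=16)
+DISTRIBUTED BY (id);
+```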
+
+**Note:** Column-oriented tables are no longer supported. Use Parquet tables for HAWQ internal tables.
+
+The `PARTITION BY` clause allows you to divide the table into multiple sub-tables (or parts) that, taken together, make up the parent table and share its schema. Though the sub-tables exist as independent tables, HAWQ restricts their use in important ways. Internally, partitioning is implemented as a special form of inheritance. Each child table partition is created with a distinct `CHECK` constraint which limits the data the table can contain, based on some defining criteria. The `CHECK` constraints are also used by the query planner to determine which table partitions to scan in order to satisfy a given query predicate. These partition constraints are managed automatically by HAWQ.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>GLOBAL | LOCAL  </dt>
+<dd>These keywords are present for SQL standard compatibility, but have no effect in HAWQ.</dd>
+
+<dt>TEMPORARY | TEMP  </dt>
+<dd>If specified, the table is created as a temporary table. Temporary tables are automatically dropped at the end of a session, or optionally at the end of the current transaction (see `ON COMMIT`). Existing permanent tables with the same name are not visible to the current session while the temporary table exists, unless they are referenced with schema-qualified names. Any indexes created on a temporary table are automatically temporary as well.</dd>
+
+<dt> \<table\_name\>  </dt>
+<dd>The name (optionally schema-qualified) of the table to be created.</dd>
+
+<dt> \<column\_name\>  </dt>
+<dd>The name of a column to be created in the new table.</dd>
+
+<dt> \<data\_type\>  </dt>
+<dd>The data type of the column. This may include array specifiers.</dd>
+
+<dt>DEFAULT \<default\_expr\>  </dt>
+<dd>The `DEFAULT` clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (subqueries and cross-references to other columns in the current table are not allowed). The data type of the default expression must match the data type of the column. The default expression will be used in any insert operation that does not specify a value for the column. If there is no default for a column, then the default is null.</dd>
+
+<dt>INHERITS  </dt>
+<dd>The optional `INHERITS` clause specifies a list of tables from which the new table automatically inherits all columns. Use of `INHERITS` creates a persistent relationship between the new child table and its parent table(s). Schema modifications to the parent(s) normally propagate to children as well, and by default the data of the child table is included in scans of the parent(s).
+
+In HAWQ, the `INHERITS` clause is not used when creating partitioned tables. Although the concept of inheritance is used in partition hierarchies, the inheritance structure of a partitioned table is created using the PARTITION BY clause.
+
+If the same column name exists in more than one parent table, an error is reported unless the data types of the columns match in each of the parent tables. If there is no conflict, then the duplicate columns are merged to form a single column in the new table. If the column name list of the new table contains a column name that is also inherited, the data type must likewise match the inherited column(s), and the column definitions are merged into one. However, inherited and new column declarations of the same name need not specify identical constraints: all constraints provided from any declaration are merged together and all are applied to the new table. If the new table explicitly specifies a default value for the column, this default overrides any defaults from inherited declarations of the column. Otherwise, any parents that specify default values for the column must all specify the same default, or an error will be reported.</dd>
+
+<dt>LIKE \<other\_table\> \[{INCLUDING | EXCLUDING} {DEFAULTS | CONSTRAINTS}\]  </dt>
+<dd>The `LIKE` clause specifies a table from which the new table automatically copies all column names, data types, not-null constraints, and distribution policy. Storage properties like append-only or partition structure are not copied. Unlike `INHERITS`, the new table and original table are completely decoupled after creation is complete.
+
+Default expressions for the copied column definitions will only be copied if `INCLUDING DEFAULTS` is specified. The default behavior is to exclude default expressions, resulting in the copied columns in the new table having null defaults.
+
+Not-null constraints are always copied to the new table. `CHECK` constraints will only be copied if `INCLUDING CONSTRAINTS` is specified; other types of constraints will *never* be copied. Also, no distinction is made between column constraints and table constraints; when constraints are requested, all check constraints are copied.
+
+Note also that unlike `INHERITS`, copied columns and constraints are not merged with similarly named columns and constraints. If the same name is specified explicitly or in another `LIKE` clause an error is signalled.</dd>
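+
+As an illustration of these copying rules (the table names here are hypothetical):
+
+``` pre
+CREATE TABLE payments (
+    id int,
+    amt numeric(10,2) DEFAULT 0 CHECK (amt >= 0),
+    note text NOT NULL
+);
+
+-- Copies column names, data types, and NOT NULL constraints only;
+-- the DEFAULT and CHECK are excluded by default
+CREATE TABLE payments_basic (LIKE payments);
+
+-- Copies the default expressions and check constraints as well
+CREATE TABLE payments_full (LIKE payments
+    INCLUDING DEFAULTS INCLUDING CONSTRAINTS);
+```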
+
+<dt>NULL | NOT NULL  </dt>
+<dd>Specifies if the column is or is not allowed to contain null values. `NULL` is the default.</dd>
+
+<dt>CHECK ( \<expression\> )  </dt>
+<dd>The `CHECK` clause specifies an expression producing a Boolean result which new or updated rows must satisfy for an insert or update operation to succeed. Expressions evaluating to `TRUE` or `UNKNOWN` succeed. Should any row of an insert or update operation produce a `FALSE` result an error exception is raised and the insert or update does not alter the database. A check constraint specified as a column constraint should reference that column's value only, while an expression appearing in a table constraint may reference multiple columns. `CHECK` expressions cannot contain subqueries nor refer to variables other than columns of the current row.</dd>
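+
+For example, the same kind of rule can be written as a column constraint when it involves one column, or as a table constraint when it spans several (illustrative names):
+
+``` pre
+CREATE TABLE bookings (
+    price numeric CHECK (price > 0),    -- column constraint: one column only
+    start_date date,
+    end_date date,
+    CHECK (end_date > start_date)       -- table constraint: multiple columns
+);
+```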
+
+<dt>WITH ( \<storage\_option\>=\<value\> )  </dt>
+<dd>The `WITH` clause can be used to set storage options for the table or its indexes. Note that you can also set storage parameters on a particular partition or subpartition by declaring the `WITH` clause in the partition specification.
+
+Note: You cannot create a table with both column encodings and compression parameters in a WITH clause.
+
+The following storage options are available:
+
+**APPENDONLY** - Set to `TRUE` to create the table as an append-only table. If `FALSE` is specified, an error message displays stating that heap tables are not supported.
+
+**BLOCKSIZE** - Set to the size, in bytes, of each block in the table. `BLOCKSIZE` must be between 8192 and 2097152 bytes, and must be a multiple of 8192. The default is 32768.
+
+**bucketnum** - Set to the number of hash buckets to use for a hash-distributed table, specified as an integer greater than 0 and no more than the value of `default_hash_table_bucket_number`. The default when the table is created is 6 times the segment count. However, explicitly setting the bucket number when creating a hash-distributed table is recommended.
+
+**ORIENTATION** - Set to `row` (the default) for row-oriented storage, or `parquet`. The parquet column-oriented format can be more efficient for large-scale queries. This option is only valid if `APPENDONLY=TRUE`.
+
+**COMPRESSTYPE** - Set to `ZLIB`, `SNAPPY`, or `GZIP` to specify the type of compression used. `ZLIB` provides more compact compression ratios at lower speeds. Parquet tables support `SNAPPY` and `GZIP` compression. Append-only tables support `SNAPPY` and `ZLIB` compression. This option is valid only if `APPENDONLY=TRUE`.
+
+**COMPRESSLEVEL** - Set to an integer value from 1 (fastest compression) to 9 (highest compression ratio). If not specified, the default is 1. This option is valid only if `APPENDONLY=TRUE` and `COMPRESSTYPE=[ZLIB|GZIP]`.
+
+**OIDS** - Set to `OIDS=FALSE` (the default) so that rows do not have object identifiers assigned to them. Do not enable OIDs when creating a table. On large tables, such as those in a typical HAWQ system, using OIDs for table rows can cause wrap-around of the 32-bit OID counter. Once the counter wraps around, OIDs can no longer be assumed to be unique, which not only makes them useless to user applications, but can also cause problems in the HAWQ system catalog tables. In addition, excluding OIDs from a table reduces the space required to store the table on disk by 4 bytes per row, slightly improving performance. OIDs are not allowed on partitioned tables.</dd>
+
+<dt>ON COMMIT  </dt>
+<dd>The behavior of temporary tables at the end of a transaction block can be controlled using `ON COMMIT`. The three options are:
+
+**PRESERVE ROWS** - No special action is taken at the ends of transactions for temporary tables. This is the default behavior.
+
+**DELETE ROWS** - All rows in the temporary table will be deleted at the end of each transaction block. Essentially, an automatic `TRUNCATE` is done at each commit.
+
+**DROP** - The temporary table will be dropped at the end of the current transaction block.</dd>
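+
+For example, a temporary scratch table that is emptied at every commit might be declared as follows (a sketch; the table name is illustrative):
+
+``` pre
+CREATE TEMPORARY TABLE scratch (k int, v text)
+ON COMMIT DELETE ROWS;
+```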
+
+<dt>TABLESPACE \<tablespace\>  </dt>
+<dd>The name of the tablespace in which the new table is to be created. If not specified, the database's default tablespace, `dfs_default`, is used. Creating a table in the tablespace `pg_default` is not allowed.</dd>
+
+<dt>DISTRIBUTED BY (\<column\>, \[ ... \] )  
+DISTRIBUTED RANDOMLY  </dt>
+<dd>Used to declare the HAWQ distribution policy for the table. The default is `RANDOM` distribution. `DISTRIBUTED BY` can use hash distribution with one or more columns declared as the distribution key. If hash distribution is desired, it must be specified using the first eligible column of the table as the distribution key.</dd>
+
+<dt>PARTITION BY  </dt>
+<dd>Declares one or more columns by which to partition the table.</dd>
+
+<dt> \<partition\_type\>  </dt>
+<dd>Declares partition type: `LIST` (list of values) or `RANGE` (a numeric or date range).</dd>
+
+<dt> \<partition\_specification\>  </dt>
+<dd>Declares the individual partitions to create. Each partition can be defined individually or, for range partitions, you can use the `EVERY` clause (with a `START` and optional `END` clause) to define an increment pattern to use to create the individual partitions.
+
+**`DEFAULT PARTITION \<name\>`** - Declares a default partition. When data does not match an existing partition, it is inserted into the default partition. Partition designs that do not have a default partition will reject incoming rows that do not match an existing partition.
+
+**`PARTITION \<name\>`** - Declares a name to use for the partition. Partitions are created using the following naming convention: `parentname_level#_prt_givenname`.
+
+**`VALUES`** - For list partitions, defines the value(s) that the partition will contain.
+
+**`START`** - For range partitions, defines the starting range value for the partition. By default, start values are `INCLUSIVE`. For example, if you declared a start date of '`2008-01-01`', then the partition would contain all dates greater than or equal to '`2008-01-01`'. Typically the data type of the `START` expression is the same type as the partition key column. If that is not the case, then you must explicitly cast to the intended data type.
+
+**`END`** - For range partitions, defines the ending range value for the partition. By default, end values are `EXCLUSIVE`. For example, if you declared an end date of '`2008-02-01`', then the partition would contain all dates less than but not equal to '`2008-02-01`'. Typically the data type of the `END` expression is the same type as the partition key column. If that is not the case, then you must explicitly cast to the intended data type.
+
+**`EVERY`** - For range partitions, defines how to increment the values from `START` to `END` to create individual partitions. Typically the data type of the `EVERY` expression is the same type as the partition key column. If that is not the case, then you must explicitly cast to the intended data type.
+
+**`WITH`** - Sets the table storage options for a partition. For example, you may want older partitions to use a different compression type or level than newer partitions.
+
+**`TABLESPACE`** - The name of the tablespace in which the partition is to be created.</dd>
+
+<dt>SUBPARTITION BY  </dt>
+<dd>Declares one or more columns by which to subpartition the first-level partitions of the table. The format of the subpartition specification is similar to that of a partition specification described above.</dd>
+
+<dt>SUBPARTITION TEMPLATE  </dt>
+<dd>Instead of declaring each subpartition definition individually for each partition, you can optionally declare a subpartition template to be used to create the subpartitions. This subpartition specification would then apply to all parent partitions.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Using OIDs in new applications is not recommended. Avoid assuming that OIDs are unique across tables; if you need a database-wide unique identifier, use the combination of table OID and row OID for the purpose.
+
+Primary key and foreign key constraints are not supported in HAWQ. For inherited tables, table privileges *are not* inherited in the current implementation.
+
+HAWQ also supports the parquet columnar storage format. Parquet tables can improve performance for large-scale queries.
+
+## <a id="parquetset"></a>Setting Parameters for Parquet Tables
+
+You can set three kinds of parameters for a parquet table.
+
+1.  Set the parquet orientation parameter:
+
+    ``` pre
+    with (appendonly=true, orientation=parquet);
+    ```
+
+2.  Set the compression type parameter. Parquet tables can be compressed using either `SNAPPY` or `GZIP`. `GZIP` supports compression level values between 1 and 9. `SNAPPY` does not support a compression level; providing a compression level when using `SNAPPY` will cause the create table operation to fail. Specifying a compression level but no compression type when creating a parquet table will default to `GZIP` compression.
+
+    **Note:**  For best performance with parquet storage, use `SNAPPY` compression. 
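+
+    For instance (the table names here are hypothetical), both valid combinations look like this; note that `SNAPPY` takes no compression level:
+
+    ``` pre
+    CREATE TABLE t_snappy (a int)
+    WITH (APPENDONLY=true, ORIENTATION=parquet, COMPRESSTYPE=snappy);
+
+    CREATE TABLE t_gzip (a int)
+    WITH (APPENDONLY=true, ORIENTATION=parquet,
+          COMPRESSTYPE=gzip, COMPRESSLEVEL=6);
+    ```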
+
+3.  Set the data storage parameters: By default, the parameters `PAGESIZE` and `ROWGROUPSIZE` are set to 1 MB and 8 MB, respectively, for both common and partitioned tables.
+
+    **Note:** The page size should be less than the row group size, because a row group includes the metadata information of a single page even for a single-column table. The parameters `PAGESIZE` and `ROWGROUPSIZE` are valid for parquet tables, while `BLOCKSIZE` is valid for append-only tables.
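+
+    As an illustration (hypothetical table name; sizes are in bytes), a 2 MB page size with a 16 MB row group size could be declared as:
+
+    ``` pre
+    CREATE TABLE t_wide (a int, b text)
+    WITH (APPENDONLY=true, ORIENTATION=parquet,
+          PAGESIZE=2097152, ROWGROUPSIZE=16777216);
+    ```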
+
+## <a id="aboutparquet"></a>About Parquet Storage
+
+DDL and DML: Most DDL and DML operations are valid for a parquet table. The usage for DDL and DML operations is similar to append-only tables. Valid operations on parquet tables include:
+
+-   Parquet table creation (with/without partition, with/without compression type)
+-   Insert and Select
+
+**Compression type and level**: You can only set the compression type at the table level. HAWQ does not support setting column-level compression. The specified compression type is propagated to the columns. All the columns must have the same compression type and level.
+
+Using `SNAPPY` compression with parquet files is recommended for best performance.
+
+**Data type**: HAWQ supports all data types except arrays and user-defined types.
+
+**Alter table**: HAWQ does not support adding a new column to an existing parquet table or dropping a column. You can use `ALTER TABLE` for a partition operation.
+
+**FillFactor/OIDS/Checksum**: HAWQ does not support these options when creating parquet tables. The default value of checksum for a parquet table is false; you cannot set this value or specify fillfactor and oids.
+
+**Memory occupation**: When inserting or loading data into a parquet table, the whole row group is stored in physical memory until its size exceeds the threshold or the `INSERT` operation ends; once either occurs, the entire row group is flushed to disk. At the beginning of the `INSERT` operation, each column is pre-allocated a page buffer. For the first row group, the pre-allocated page buffer size for a column should be `min(pageSizeLimit, rowgroupSizeLimit / estimatedColumnWidth / estimatedRecordWidth)`. For each subsequent row group, it should be `min(pageSizeLimit, actualColumnChunkSize in last rowgroup * 1.05)`, where 1.05 is the estimated scaling factor. When reading data from a parquet table, the requested columns of the row group are loaded into memory; 8 MB is allocated by default. Ensure that memory occupation does not exceed physical memory when setting `ROWGROUPSIZE` or `PAGESIZE`; otherwise, you may encounter an out of memory error.
+
+**Bulk vs. trickle loads**
+Only bulk loads are recommended for use with parquet tables. Trickle loads can result in bloated footers and larger data files.
+
+## <a id="parquetexamples"></a>Parquet Examples
+
+**Parquet Example 1**
+
+Create an append-only table using the parquet format:
+
+``` pre
+CREATE TABLE customer ( id integer, fname text, lname text,
+    address text, city text, state text, zip text )
+WITH (APPENDONLY=true, ORIENTATION=parquet, OIDS=FALSE)
+DISTRIBUTED BY (id);
+```
+
+**Parquet Example 2**
+
+Create a parquet table with twelve monthly partitions:
+
+``` pre
+CREATE TABLE sales (id int, date date, amt decimal(10,2))
+WITH (APPENDONLY=true, ORIENTATION=parquet, OIDS=FALSE)
+DISTRIBUTED BY (id)
+PARTITION BY RANGE (date)
+  ( START (date '2016-01-01') INCLUSIVE
+    END   (date '2017-01-01') EXCLUSIVE
+    EVERY (INTERVAL '1 month')
+  );
+```
+
+**Parquet Example 3**
+
+Add a new partition to the sales table:
+
+``` pre
+ALTER TABLE sales ADD PARTITION
+    START (date '2017-01-01') INCLUSIVE
+    END (date '2017-02-01') EXCLUSIVE;
+```
+
+## <a id="aoexamples"></a>AO Examples
+
+Append-only tables support `ZLIB` and `SNAPPY` compression types.
+
+**AO Example 1**: Create a table named rank in the schema named baby and distribute the data using the columns rank, gender, and year:
+
+``` pre
+CREATE TABLE baby.rank ( id int, rank int, year smallint, gender char(1), count int )
+DISTRIBUTED BY (rank, gender, year);
+```
+
+**AO Example 2**: Create table films and table distributors. The first column will be used as the HAWQ distribution key by default:
+
+``` pre
+CREATE TABLE films (
+    code char(5), title varchar(40) NOT NULL, did integer NOT NULL,
+    date_prod date, kind varchar(10), len interval hour to minute
+);
+
+CREATE TABLE distributors (
+    did integer,
+    name varchar(40) NOT NULL CHECK (name <> '')
+);
+```
+
+**AO Example 3**: Create a snappy-compressed, append-only table:
+
+``` pre
+CREATE TABLE sales (txn_id int, qty int, date date)
+WITH (appendonly=true, compresstype=snappy)
+DISTRIBUTED BY (txn_id);
+```
+
+**AO Example 4**: Create a three-level partitioned table using subpartition templates and default partitions at each level:
+
+``` pre
+CREATE TABLE sales (id int, year int, month int, day int,
+region text)
+DISTRIBUTED BY (id)
+PARTITION BY RANGE (year)
+SUBPARTITION BY RANGE (month)
+SUBPARTITION TEMPLATE (
+START (1) END (13) EVERY (1),
+DEFAULT SUBPARTITION other_months )
+SUBPARTITION BY LIST (region)
+SUBPARTITION TEMPLATE (
+SUBPARTITION usa VALUES ('usa'),
+SUBPARTITION europe VALUES ('europe'),
+SUBPARTITION asia VALUES ('asia'),
+DEFAULT SUBPARTITION other_regions)
+( START (2002) END (2010) EVERY (1),
+DEFAULT PARTITION outlying_years);
+```
+
+**AO Example 5**: Create a hash-distributed table named "sales" with 100 buckets:
+
+``` pre
+CREATE TABLE sales(id int, profit float)
+WITH (bucketnum=100)
+DISTRIBUTED BY (id);
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The `CREATE TABLE` command conforms to the SQL standard, with the following exceptions:
+
+-   **Temporary Tables** - In the SQL standard, temporary tables are defined just once and automatically exist (starting with empty contents) in every session that needs them. HAWQ instead requires each session to issue its own `CREATE TEMPORARY TABLE` command for each temporary table to be used. This allows different sessions to use the same temporary table name for different purposes, whereas the standard's approach constrains all instances of a given temporary table name to have the same table structure.
+
+    The standard's distinction between global and local temporary tables does not exist in HAWQ. HAWQ will accept the `GLOBAL` and `LOCAL` keywords in a temporary table declaration, but they have no effect.
+
+    If the `ON COMMIT` clause is omitted, the SQL standard specifies that the default behavior is `ON COMMIT DELETE ROWS`. However, the default behavior in HAWQ is `ON COMMIT PRESERVE ROWS`. The `ON COMMIT DROP` option does not exist in the SQL standard.
+
+-   **Column Check Constraints** - The SQL standard says that `CHECK` column constraints may only refer to the column they apply to; only `CHECK` table constraints may refer to multiple columns. HAWQ does not enforce this restriction; it treats column and table check constraints alike.
+-   **NULL Constraint** - The `NULL` constraint is a HAWQ extension to the SQL standard that is included for compatibility with some other database systems (and for symmetry with the `NOT NULL` constraint). Since it is the default for any column, its presence is not required.
+-   **Inheritance** - Multiple inheritance via the `INHERITS` clause is a HAWQ language extension. SQL:1999 and later define single inheritance using a different syntax and different semantics. SQL:1999-style inheritance is not yet supported by HAWQ.
+-   **Partitioning** - Table partitioning via the `PARTITION BY` clause is a HAWQ language extension.
+-   **Zero-column tables** - HAWQ allows a table of no columns to be created (for example, `CREATE TABLE foo();`). This is an extension from the SQL standard, which does not allow zero-column tables. Zero-column tables are not in themselves very useful, but disallowing them creates odd special cases for `ALTER TABLE DROP COLUMN`, so this spec restriction is ignored.
+-   **WITH clause** - The `WITH` clause is an extension; neither storage parameters nor OIDs are in the standard.
+-   **Tablespaces** - The HAWQ concept of tablespaces is not part of the SQL standard. The clauses `TABLESPACE` and `USING INDEX TABLESPACE` are extensions.
+-   **Data Distribution** - The HAWQ concept of a parallel or distributed database is not part of the SQL standard. The `DISTRIBUTED` clauses are extensions.
+
+## <a id="topic1__section8"></a>See Also
+
+[ALTER TABLE](ALTER-TABLE.html), [DROP TABLE](DROP-TABLE.html), [CREATE EXTERNAL TABLE](CREATE-EXTERNAL-TABLE.html), [CREATE TABLE AS](CREATE-TABLE-AS.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CREATE-TABLESPACE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-TABLESPACE.html.md.erb b/reference/sql/CREATE-TABLESPACE.html.md.erb
new file mode 100644
index 0000000..2d20107
--- /dev/null
+++ b/reference/sql/CREATE-TABLESPACE.html.md.erb
@@ -0,0 +1,58 @@
+---
+title: CREATE TABLESPACE
+---
+
+Defines a new tablespace.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE TABLESPACE <tablespace_name> [OWNER <username>]
+       FILESPACE <filespace_name>
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE TABLESPACE` registers a new tablespace for your HAWQ system. The tablespace name must be distinct from the name of any existing tablespace in the system.
+
+A tablespace allows superusers to define an alternative location on the file system where the data files containing database objects (such as tables) may reside.
+
+A user with appropriate privileges can pass a tablespace name to [CREATE DATABASE](CREATE-DATABASE.html) or [CREATE TABLE](CREATE-TABLE.html) to have the data files for these objects stored within the specified tablespace.
+
+In HAWQ, there must be a file system location defined for the master and each segment in order for the tablespace to have a location to store its objects across an entire HAWQ system. This collection of file system locations is defined in a filespace object. A filespace must be defined before you can create a tablespace. See [hawq filespace](../cli/admin_utilities/hawqfilespace.html#topic1) for more information.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt> \<tablespace_name\>   </dt>
+<dd>The name of a tablespace to be created. The name cannot begin with `pg_`, as such names are reserved for system tablespaces.</dd>
+
+<dt>OWNER \<username\>   </dt>
+<dd>The name of the user who will own the tablespace. If omitted, defaults to the user executing the command. Only superusers may create tablespaces, but they can assign ownership of tablespaces to non-superusers.</dd>
+
+<dt>FILESPACE \<filespace\_name\>   </dt>
+<dd>The name of a HAWQ filespace that was defined using the `hawq filespace` management utility.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+You must first create a filespace to be used by the tablespace. See [hawq filespace](../cli/admin_utilities/hawqfilespace.html#topic1) for more information.
+
+Tablespaces are only supported on systems that support symbolic links.
+
+`CREATE TABLESPACE` cannot be executed inside a transaction block.
+
+## <a id="topic1__section6"></a>Examples
+
+Create a new tablespace by specifying the corresponding filespace to use:
+
+``` pre
+CREATE TABLESPACE mytblspace FILESPACE myfilespace;
+```
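+
+The tablespace name can then be passed to other commands, as described above (a sketch; the table and database names are hypothetical, and the `mytblspace` tablespace is assumed to exist):
+
+``` pre
+CREATE TABLE foo (i int) TABLESPACE mytblspace;
+
+CREATE DATABASE mydb TABLESPACE mytblspace;
+```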
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE TABLESPACE` is a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE DATABASE](CREATE-DATABASE.html), [CREATE TABLE](CREATE-TABLE.html), [DROP TABLESPACE](DROP-TABLESPACE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CREATE-TYPE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-TYPE.html.md.erb b/reference/sql/CREATE-TYPE.html.md.erb
new file mode 100644
index 0000000..9e7b59f
--- /dev/null
+++ b/reference/sql/CREATE-TYPE.html.md.erb
@@ -0,0 +1,185 @@
+---
+title: CREATE TYPE
+---
+
+Defines a new data type.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE TYPE <name> AS ( <attribute_name>
+            <data_type> [, ... ] )
+
+CREATE TYPE <name> (
+    INPUT = <input_function>,
+    OUTPUT = <output_function>
+    [, RECEIVE = <receive_function>]
+    [, SEND = <send_function>]
+    [, INTERNALLENGTH = {<internallength> | VARIABLE}]
+    [, PASSEDBYVALUE]
+    [, ALIGNMENT = <alignment>]
+    [, STORAGE = <storage>]
+    [, DEFAULT = <default>]
+    [, ELEMENT = <element>]
+    [, DELIMITER = <delimiter>] )
+
+CREATE TYPE <name>
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE TYPE` registers a new data type for use in the current database. The user who defines a type becomes its owner.
+
+If a schema name is given then the type is created in the specified schema. Otherwise it is created in the current schema. The type name must be distinct from the name of any existing type or domain in the same schema. The type name must also be distinct from the name of any existing table in the same schema.
+
+**Composite Types**
+
+The first form of `CREATE TYPE` creates a composite type. This is the only form currently supported by HAWQ. The composite type is specified by a list of attribute names and data types. This is essentially the same as the row type of a table, but using `CREATE TYPE` avoids the need to create an actual table when all that is wanted is to define a type. A stand-alone composite type is useful as the argument or return type of a function.
+
+**Base Types**
+
+The second form of `CREATE TYPE` creates a new base type (scalar type). The parameters may appear in any order, not only that shown in the syntax, and most are optional. You must register two or more functions (using `CREATE FUNCTION`) before defining the type. The support functions \<input\_function\> and \<output\_function\> are required, while the functions \<receive\_function\>, \<send\_function\> and \<analyze\_function\> are optional. Generally these functions have to be coded in C or another low-level language. In HAWQ, any function used to implement a data type must be defined as `IMMUTABLE`.
+
+The \<input\_function\> converts the type's external textual representation to the internal representation used by the operators and functions defined for the type. \<output\_function\> performs the reverse transformation. The input function may be declared as taking one argument of type `cstring`, or as taking three arguments of types `cstring`, `oid`, `integer`. The first argument is the input text as a C string, the second argument is the type's own OID (except for array types, which instead receive their element type's OID), and the third is the `typmod` of the destination column, if known (`-1` will be passed if not). The input function must return a value of the data type itself. Usually, an input function should be declared `STRICT`; if it is not, it will be called with a `NULL` first parameter when reading a `NULL` input value. The function must still return `NULL` in this case, unless it raises an error. (This case is mainly meant to support domain input functions, which may need to reject `NULL` inputs.) The output function must be declared as taking one argument of the new data type. The output function must return type `cstring`. Output functions are not invoked for `NULL` values.
+
+The optional \<receive\_function\> converts the type's external binary representation to the internal representation. If this function is not supplied, the type cannot participate in binary input. The binary representation should be chosen to be cheap to convert to internal form, while being reasonably portable. (For example, the standard integer data types use network byte order as the external binary representation, while the internal representation is in the machine's native byte order.) The receive function should perform adequate checking to ensure that the value is valid. The receive function may be declared as taking one argument of type `internal`, or as taking three arguments of types `internal`, `oid`, `integer`. The first argument is a pointer to a `StringInfo` buffer holding the received byte string; the optional arguments are the same as for the text input function. The receive function must return a value of the data type itself. Usually, a receive function should be declared `STRICT`; if it is not, it will be called with a `NULL` first parameter when reading a `NULL` input value. The function must still return `NULL` in this case, unless it raises an error. (This case is mainly meant to support domain receive functions, which may need to reject `NULL` inputs.) Similarly, the optional \<send\_function\> converts from the internal representation to the external binary representation. If this function is not supplied, the type cannot participate in binary output. The send function must be declared as taking one argument of the new data type. The send function must return type `bytea`. Send functions are not invoked for `NULL` values.
+
+You should at this point be wondering how the input and output functions can be declared to have results or arguments of the new type, when they have to be created before the new type can be created. The answer is that the type should first be defined as a shell type, which is a placeholder type that has no properties except a name and an owner. This is done by issuing the command `CREATE TYPE name`, with no additional parameters. Then the I/O functions can be defined referencing the shell type. Finally, `CREATE TYPE` with a full definition replaces the shell entry with a complete, valid type definition, after which the new type can be used normally.
+
+While the details of the new type's internal representation are only known to the I/O functions and other functions you create to work with the type, there are several properties of the internal representation that must be declared to HAWQ. Foremost of these is \<internallength\>. Base data types can be fixed-length, in which case \<internallength\> is a positive integer, or variable length, indicated by setting \<internallength\> to `VARIABLE`. (Internally, this is represented by setting `typlen` to `-1`.) The internal representation of all variable-length types must start with a 4-byte integer giving the total length of this value of the type.
+
+The optional flag `PASSEDBYVALUE` indicates that values of this data type are passed by value, rather than by reference. Types whose internal representation is larger than the size of the `Datum` type (4 bytes on most machines, 8 bytes on a few) cannot be passed by value.
+
+The \<alignment\> parameter specifies the storage alignment required for the data type. The allowed values equate to alignment on 1, 2, 4, or 8 byte boundaries. Note that variable-length types must have an alignment of at least 4, since they necessarily contain an `int4` as their first component.
+
+The \<storage\> parameter allows selection of storage strategies for variable-length data types. (Only `plain` is allowed for fixed-length types.) `plain` specifies that data of the type will always be stored in-line and not compressed. `extended` specifies that the system will first try to compress a long data value, and will move the value out of the main table row if it's still too long. `external` allows the value to be moved out of the main table, but the system will not try to compress it. `main` allows compression, but discourages moving the value out of the main table. (Data items with this storage strategy may still be moved out of the main table if there is no other way to make a row fit, but they will be kept in the main table preferentially over `extended` and `external` items.)
+
+A default value may be specified, in case a user wants columns of the data type to default to something other than the null value. Specify the default with the `DEFAULT` key word. (Such a default may be overridden by an explicit `DEFAULT` clause attached to a particular column.)
+
+To indicate that a type is an array, specify the type of the array elements using the `ELEMENT` key word. For example, to define an array of 4-byte integers (int4), specify `ELEMENT = int4`. More details about array types appear below.
+
+To indicate the delimiter to be used between values in the external representation of arrays of this type, `delimiter` can be set to a specific character. The default delimiter is the comma (,). Note that the delimiter is associated with the array element type, not the array type itself.
+
+**Array Types**
+
+Whenever a user-defined base data type is created, HAWQ automatically creates an associated array type, whose name consists of the base type's name prepended with an underscore. The parser understands this naming convention, and translates requests for columns of type `foo[]` into requests for type `_foo`. The implicitly-created array type is variable length and uses the built-in input and output functions `array_in` and `array_out`.
+
+You might reasonably ask why there is an `ELEMENT` option, if the system makes the correct array type automatically. The only case where it's useful to use `ELEMENT` is when you are making a fixed-length type that happens to be internally an array of a number of identical things, and you want to allow these things to be accessed directly by subscripting, in addition to whatever operations you plan to provide for the type as a whole. For example, type `name` allows its constituent `char` elements to be accessed this way. A 2-D point type could allow its two component numbers to be accessed like point\[0\] and point\[1\]. Note that this facility only works for fixed-length types whose internal form is exactly a sequence of identical fixed-length fields. A subscriptable variable-length type must have the generalized internal representation used by `array_in` and `array_out`. For historical reasons, subscripting of fixed-length array types starts from zero, rather than from one as for variable-length arrays.
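+
+For instance, once a base type such as the `box` type defined in the Examples below exists, its array type is available automatically (a sketch; the table and column names are hypothetical):
+
+``` pre
+CREATE TABLE pictures (
+    id integer,
+    shapes box[]   -- translated by the parser into the implicit array type _box
+);
+```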
+
+## <a id="topic1__section7"></a>Parameters
+
+<dt> \<name\>  </dt>
+<dd>The name (optionally schema-qualified) of a type to be created.</dd>
+
+<dt> \<attribute\_name\>  </dt>
+<dd>The name of an attribute (column) for the composite type.</dd>
+
+<dt> \<data\_type\>  </dt>
+<dd>The name of an existing data type to become a column of the composite type.</dd>
+
+<dt> \<input\_function\>  </dt>
+<dd>The name of a function that converts data from the type's external textual form to its internal form.</dd>
+
+<dt> \<output\_function\>  </dt>
+<dd>The name of a function that converts data from the type's internal form to its external textual form.</dd>
+
+<dt> \<receive\_function\>  </dt>
+<dd>The name of a function that converts data from the type's external binary form to its internal form.</dd>
+
+<dt> \<send\_function\>  </dt>
+<dd>The name of a function that converts data from the type's internal form to its external binary form.</dd>
+
+<dt> \<internallength\>  </dt>
+<dd>A numeric constant that specifies the length in bytes of the new type's internal representation. The default assumption is that it is variable-length.</dd>
+
+<dt> \<alignment\>  </dt>
+<dd>The storage alignment requirement of the data type. Must be one of `char`, `int2`, `int4`, or `double`. The default is `int4`.</dd>
+
+<dt> \<storage\>  </dt>
+<dd>The storage strategy for the data type. Must be one of `plain`, `external`, `extended`, or `main`. The default is `plain`.</dd>
+
+<dt> \<default\>  </dt>
+<dd>The default value for the data type. If this is omitted, the default is null.</dd>
+
+<dt> \<element\>  </dt>
+<dd>The type being created is an array; this specifies the type of the array elements.</dd>
+
+<dt> \<delimiter\>  </dt>
+<dd>The delimiter character to be used between values in arrays made of this type.</dd>
+
+## <a id="topic1__section8"></a>Notes
+
+User-defined type names cannot begin with the underscore character (\_) and can only be 62 characters long (or in general `NAMEDATALEN - 2`, rather than the `NAMEDATALEN - 1` characters allowed for other names). Type names beginning with underscore are reserved for internally-created array type names.
+
+Because there are no restrictions on use of a data type once it's been created, creating a base type is tantamount to granting public execute permission on the functions mentioned in the type definition. (The creator of the type is therefore required to own these functions.) This is usually not an issue for the sorts of functions that are useful in a type definition. But you might want to think twice before designing a type in a way that would require 'secret' information to be used while converting it to or from external form.
+
+## <a id="topic1__section9"></a>Examples
+
+This example creates a composite type and uses it in a function definition:
+
+``` pre
+CREATE TYPE compfoo AS (f1 int, f2 text);
+
+CREATE FUNCTION getfoo() RETURNS SETOF compfoo AS $$
+    SELECT fooid, fooname FROM foo
+$$ LANGUAGE SQL;
+```
+
+This example creates the base data type `box` and then uses the type in a table definition:
+
+``` pre
+CREATE TYPE box;
+
+CREATE FUNCTION my_box_in_function(cstring) RETURNS box AS
+... ;
+
+CREATE FUNCTION my_box_out_function(box) RETURNS cstring AS
+... ;
+
+CREATE TYPE box (
+    INTERNALLENGTH = 16,
+    INPUT = my_box_in_function,
+    OUTPUT = my_box_out_function
+);
+
+CREATE TABLE myboxes (
+    id integer,
+    description box
+);
+```
+
+If the internal structure of `box` were an array of four `float4` elements, we might instead use:
+
+``` pre
+CREATE TYPE box (
+    INTERNALLENGTH = 16,
+    INPUT = my_box_in_function,
+    OUTPUT = my_box_out_function,
+    ELEMENT = float4
+);
+```
+
+which would allow a box value's component numbers to be accessed by subscripting. Otherwise the type behaves the same as before.
+
+This example creates a large object type and uses it in a table definition:
+
+``` pre
+CREATE TYPE bigobj (
+    INPUT = lo_filein, OUTPUT = lo_fileout,
+    INTERNALLENGTH = VARIABLE
+);
+
+CREATE TABLE big_objs (
+    id integer,
+    obj bigobj
+);
+```
+
+## <a id="topic1__section10"></a>Compatibility
+
+The `CREATE TYPE` command is a HAWQ extension. There is a `CREATE TYPE` statement in the SQL standard that is rather different in detail.
+
+## <a id="topic1__section11"></a>See Also
+
+[CREATE FUNCTION](CREATE-FUNCTION.html), [ALTER TYPE](ALTER-TYPE.html), [DROP TYPE](DROP-TYPE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CREATE-USER.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-USER.html.md.erb b/reference/sql/CREATE-USER.html.md.erb
new file mode 100644
index 0000000..738c645
--- /dev/null
+++ b/reference/sql/CREATE-USER.html.md.erb
@@ -0,0 +1,46 @@
+---
+title: CREATE USER
+---
+
+Defines a new database role with the `LOGIN` privilege by default.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE USER <name> [ [WITH] <option> [ ... ] ]
+```
+
+where \<option\> can be:
+
+``` pre
+      SUPERUSER | NOSUPERUSER
+    | CREATEDB | NOCREATEDB
+    | CREATEROLE | NOCREATEROLE
+    | CREATEUSER | NOCREATEUSER
+    | INHERIT | NOINHERIT
+    | LOGIN | NOLOGIN
+    | [ ENCRYPTED | UNENCRYPTED ] PASSWORD '<password>'
+    | VALID UNTIL '<timestamp>'
+    | IN ROLE <rolename> [, ...]
+    | IN GROUP <rolename> [, ...]
+    | ROLE <rolename> [, ...]
+    | ADMIN <rolename> [, ...]
+    | USER <rolename> [, ...]
+    | SYSID <uid>
+    | RESOURCE QUEUE <queue_name>
+```
+
+## <a id="topic1__section3"></a>Description
+
+HAWQ does not support `CREATE USER`. This command has been replaced by [CREATE ROLE](CREATE-ROLE.html).
+
+The only difference between `CREATE ROLE` and `CREATE USER` is that `LOGIN` is assumed by default with `CREATE USER`, whereas `NOLOGIN` is assumed by default with `CREATE ROLE`.
+
+## <a id="topic1__section4"></a>Compatibility
+
+There is no `CREATE USER` statement in the SQL standard.
+
+## <a id="topic1__section5"></a>See Also
+
+[CREATE ROLE](CREATE-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CREATE-VIEW.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-VIEW.html.md.erb b/reference/sql/CREATE-VIEW.html.md.erb
new file mode 100644
index 0000000..4a24bb7
--- /dev/null
+++ b/reference/sql/CREATE-VIEW.html.md.erb
@@ -0,0 +1,88 @@
+---
+title: CREATE VIEW
+---
+
+Defines a new view.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [OR REPLACE] [TEMP | TEMPORARY] VIEW <name>
+       [ ( <column_name> [, ...] ) ]
+       AS <query>
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE VIEW` defines a view of a query. The view is not physically materialized. Instead, the query is run every time the view is referenced in a query.
+
+`CREATE OR REPLACE VIEW` is similar, but if a view of the same name already exists, it is replaced. You can only replace a view with a new query that generates the identical set of columns (same column names and data types).
+
+If a schema name is given then the view is created in the specified schema. Otherwise it is created in the current schema. Temporary views exist in a special schema, so a schema name may not be given when creating a temporary view. The name of the view must be distinct from the name of any other view, table, sequence, or index in the same schema.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>TEMPORARY | TEMP  </dt>
+<dd>If specified, the view is created as a temporary view. Temporary views are automatically dropped at the end of the current session. Existing permanent relations with the same name are not visible to the current session while the temporary view exists, unless they are referenced with schema-qualified names. If any of the tables referenced by the view are temporary, the view is created as a temporary view (whether `TEMPORARY` is specified or not).</dd>
+
+<dt> \<name\>   </dt>
+<dd>The name (optionally schema-qualified) of a view to be created.</dd>
+
+<dt> \<column\_name\>   </dt>
+<dd>An optional list of names to be used for columns of the view. If not given, the column names are deduced from the query.</dd>
+
+<dt> \<query\>   </dt>
+<dd>A [SELECT](SELECT.html) command which will provide the columns and rows of the view.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Views in HAWQ are read only. The system will not allow an insert, update, or delete on a view. You can get the effect of an updatable view by creating rewrite rules on the view into appropriate actions on other tables. For more information see `CREATE RULE`.
+
+Be careful that the names and data types of the view's columns will be assigned the way you want. For example, if you run the following command:
+
+``` pre
+CREATE VIEW vista AS SELECT 'Hello World';
+```
+
+The result is poor: the column name defaults to `?column?`, and the column data type defaults to `unknown`. If you want a string literal in a view's result, use the following command:
+
+``` pre
+CREATE VIEW vista AS SELECT text 'Hello World' AS hello;
+```
+
+Check that you have permission to access the tables referenced in the view. View ownership, not the status of the current user, determines permissions. This is true even if you are a superuser, which is unusual, since superusers typically have access to all objects. In the case of views, even superusers must be explicitly granted access to the referenced tables if they do not own the view.
+
+However, functions called in the view are treated the same as if they had been called directly from the query using the view. Therefore the user of a view must have permissions to call any functions used by the view.
+
+If you create a view with an `ORDER BY` clause, the `ORDER BY` clause is ignored when you do a `SELECT` from the view.
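+
+A brief sketch of this behavior (assuming the `films` table from the Examples below):
+
+``` pre
+CREATE VIEW recent_films AS SELECT * FROM films ORDER BY title;
+
+-- The view's ORDER BY is ignored; order the outer query instead:
+SELECT * FROM recent_films ORDER BY title;
+```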
+
+## <a id="topic1__section6"></a>Examples
+
+Create a view consisting of all comedy films:
+
+``` pre
+CREATE VIEW comedies AS SELECT * FROM films WHERE kind = 'comedy';
+```
+
+Create a view that gets the top ten ranked baby names:
+
+``` pre
+CREATE VIEW topten AS SELECT name, rank, gender, year FROM names, rank WHERE rank < '11' AND names.id = rank.id;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The SQL standard specifies some additional capabilities for the `CREATE VIEW` statement that are not in HAWQ. The optional clauses for the full SQL command in the standard are:
+
+-   **CHECK OPTION** - This option has to do with updatable views. All `INSERT` and `UPDATE` commands on the view will be checked to ensure data satisfy the view-defining condition (that is, the new data would be visible through the view). If they do not, the update will be rejected.
+-   **LOCAL** - Check for integrity on this view.
+-   **CASCADED** - Check for integrity on this view and on any dependent view. `CASCADED` is assumed if neither `CASCADED` nor `LOCAL` is specified.
+
+`CREATE OR REPLACE VIEW` is a HAWQ language extension. So is the concept of a temporary view.
+
+## <a id="topic1__section8"></a>See Also
+
+[SELECT](SELECT.html), [DROP VIEW](DROP-VIEW.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DEALLOCATE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DEALLOCATE.html.md.erb b/reference/sql/DEALLOCATE.html.md.erb
new file mode 100644
index 0000000..846f282
--- /dev/null
+++ b/reference/sql/DEALLOCATE.html.md.erb
@@ -0,0 +1,42 @@
+---
+title: DEALLOCATE
+---
+
+Deallocates a prepared statement.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DEALLOCATE [PREPARE] <name>
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DEALLOCATE` is used to deallocate a previously prepared SQL statement. If you do not explicitly deallocate a prepared statement, it is deallocated when the session ends.
+
+For more information on prepared statements, see [PREPARE](PREPARE.html).
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>PREPARE  </dt>
+<dd>Optional key word which is ignored.</dd>
+
+<dt>\<name\>  </dt>
+<dd>The name of the prepared statement to deallocate.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Deallocate the previously prepared statement named `insert_names`:
+
+``` pre
+DEALLOCATE insert_names;
+```
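+
+In context, a typical prepare/execute/deallocate sequence looks like this (a sketch; the statement and table names are hypothetical):
+
+``` pre
+PREPARE insert_names (text) AS
+    INSERT INTO names VALUES ($1);
+
+EXECUTE insert_names ('Alice');
+
+DEALLOCATE insert_names;
+```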
+
+## <a id="topic1__section6"></a>Compatibility
+
+The SQL standard includes a `DEALLOCATE` statement, but it is only for use in embedded SQL.
+
+## <a id="topic1__section7"></a>See Also
+
+[EXECUTE](EXECUTE.html), [PREPARE](PREPARE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DECLARE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DECLARE.html.md.erb b/reference/sql/DECLARE.html.md.erb
new file mode 100644
index 0000000..d6fed83
--- /dev/null
+++ b/reference/sql/DECLARE.html.md.erb
@@ -0,0 +1,84 @@
+---
+title: DECLARE
+---
+
+Defines a cursor.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DECLARE <name> [BINARY] [INSENSITIVE] [NO SCROLL] CURSOR
+     [{WITH | WITHOUT} HOLD]
+     FOR <query> [FOR READ ONLY]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DECLARE` allows a user to create cursors, which can be used to retrieve a small number of rows at a time out of a larger query. Cursors can return data either in text or in binary format using [FETCH](FETCH.html).
+
+Normal cursors return data in text format, the same as a `SELECT` would produce. Since data is stored natively in binary format, the system must do a conversion to produce the text format. Once the information comes back in text form, the client application may need to convert it to a binary format to manipulate it. In addition, data in the text format is often larger in size than in the binary format. Binary cursors return the data in a binary representation that may be more easily manipulated. Nevertheless, if you intend to display the data as text anyway, retrieving it in text form will save you some effort on the client side.
+
+As an example, if a query returns a value of one from an integer column, you would get a string of 1 with a default cursor whereas with a binary cursor you would get a 4-byte field containing the internal representation of the value (in big-endian byte order).
+
+Binary cursors should be used carefully. Many applications, including psql, are not prepared to handle binary cursors and expect data to come back in the text format.
+
+**Note:**
+When the client application uses the 'extended query' protocol to issue a `FETCH` command, the Bind protocol message specifies whether data is to be retrieved in text or binary format. This choice overrides the way that the cursor is defined. The concept of a binary cursor as such is thus obsolete when using extended query protocol: any cursor can be treated as either text or binary.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>\<name\>  </dt>
+<dd>The name of the cursor to be created.</dd>
+
+<dt>BINARY  </dt>
+<dd>Causes the cursor to return data in binary rather than in text format.</dd>
+
+<dt>INSENSITIVE  </dt>
+<dd>Indicates that data retrieved from the cursor should be unaffected by updates to the tables underlying the cursor while the cursor exists. In HAWQ, all cursors are insensitive. This key word currently has no effect and is present for compatibility with the SQL standard.</dd>
+
+<dt>NO SCROLL  </dt>
+<dd>A cursor cannot be used to retrieve rows in a nonsequential fashion. This is the default behavior in HAWQ, since scrollable cursors (`SCROLL`) are not supported.</dd>
+
+<dt>WITH HOLD  
+WITHOUT HOLD  </dt>
+<dd>`WITH HOLD` specifies that the cursor may continue to be used after the transaction that created it successfully commits. `WITHOUT HOLD` specifies that the cursor cannot be used outside of the transaction that created it. `WITHOUT HOLD` is the default.</dd>
+
+<dt>\<query\> </dt>
+<dd>A [SELECT](SELECT.html) command which will provide the rows to be returned by the cursor.</dd>
+
+<!-- -->
+
+<dt>FOR READ ONLY  </dt>
+<dd>`FOR READ ONLY` indicates that the cursor is used in a read-only mode. Cursors can only be used in a read-only mode in HAWQ. HAWQ does not support updatable cursors (FOR UPDATE), so this is the default behavior.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Unless `WITH HOLD` is specified, the cursor created by this command can only be used within the current transaction. Thus, `DECLARE` without `WITH HOLD` is useless outside a transaction block: the cursor would survive only to the completion of the statement. Therefore HAWQ reports an error if this command is used outside a transaction block. Use `BEGIN`, `COMMIT` and `ROLLBACK` to define a transaction block.
+
+If `WITH HOLD` is specified and the transaction that created the cursor successfully commits, the cursor can continue to be accessed by subsequent transactions in the same session. (But if the creating transaction is aborted, the cursor is removed.) A cursor created with `WITH HOLD` is closed when an explicit `CLOSE` command is issued on it, or the session ends. In the current implementation, the rows represented by a held cursor are copied into a temporary file or memory area so that they remain available for subsequent transactions.
+
+Scrollable cursors are not currently supported in HAWQ. You can only use `FETCH` to move the cursor position forward, not backwards.
+
+You can see all available cursors by querying the `pg_cursors` system view.
+
+## <a id="topic1__section6"></a>Examples
+
+Declare a cursor:
+
+``` pre
+DECLARE mycursor CURSOR FOR SELECT * FROM mytable;
+```
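+
+Since a cursor declared without `WITH HOLD` lives only within a transaction block, a complete sketch looks like this:
+
+``` pre
+BEGIN;
+DECLARE mycursor CURSOR FOR SELECT * FROM mytable;
+FETCH FORWARD 5 FROM mycursor;
+CLOSE mycursor;
+COMMIT;
+```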
+
+## <a id="topic1__section7"></a>Compatibility
+
+The SQL standard allows cursors only in embedded SQL and in modules. HAWQ permits cursors to be used interactively.
+
+HAWQ does not implement an `OPEN` statement for cursors. A cursor is considered to be open when it is declared.
+
+The SQL standard allows cursors to move both forward and backward. All HAWQ cursors are forward moving only (not scrollable).
+
+Binary cursors are a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[CLOSE](CLOSE.html), [FETCH](FETCH.html), [SELECT](SELECT.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-AGGREGATE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-AGGREGATE.html.md.erb b/reference/sql/DROP-AGGREGATE.html.md.erb
new file mode 100644
index 0000000..f40ca5f
--- /dev/null
+++ b/reference/sql/DROP-AGGREGATE.html.md.erb
@@ -0,0 +1,48 @@
+---
+title: DROP AGGREGATE
+---
+
+Removes an aggregate function.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP AGGREGATE [IF EXISTS] <name> ( <type> [, ...] ) [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP AGGREGATE` will delete an existing aggregate function. To execute this command the current user must be the owner of the aggregate function.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the aggregate does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing aggregate function.</dd>
+
+<dt>\<type\>   </dt>
+<dd>An input data type on which the aggregate function operates. To reference a zero-argument aggregate function, write `*` in place of the list of input data types.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the aggregate function.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the aggregate function if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+To remove the aggregate function `myavg` for type `integer`:
+
+``` pre
+DROP AGGREGATE myavg(integer);
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+There is no `DROP AGGREGATE` statement in the SQL standard.
+
+## <a id="topic1__section7"></a>See Also
+
+[ALTER AGGREGATE](ALTER-AGGREGATE.html), [CREATE AGGREGATE](CREATE-AGGREGATE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-DATABASE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-DATABASE.html.md.erb b/reference/sql/DROP-DATABASE.html.md.erb
new file mode 100644
index 0000000..d8ae296
--- /dev/null
+++ b/reference/sql/DROP-DATABASE.html.md.erb
@@ -0,0 +1,48 @@
+---
+title: DROP DATABASE
+---
+
+Removes a database.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP DATABASE [IF EXISTS] <name>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP DATABASE` drops a database. It removes the catalog entries for the database and deletes the directory containing the data. It can only be executed by the database owner. Also, it cannot be executed while you or anyone else are connected to the target database. (Connect to `template1` or any other database to issue this command.)
+
+**Warning:** `DROP DATABASE` cannot be undone. Use it with care!
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the database does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name of the database to remove.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+`DROP DATABASE` cannot be executed inside a transaction block.
+
+This command cannot be executed while connected to the target database. Thus, it might be more convenient to use the program `dropdb` instead, which is a wrapper around this command.
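+
+For example, a minimal shell invocation of the wrapper program (connection options such as host and port depend on your environment):
+
+``` pre
+dropdb testdb
+```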
+
+## <a id="topic1__section6"></a>Examples
+
+Drop the database named `testdb`:
+
+``` pre
+DROP DATABASE testdb;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+There is no `DROP DATABASE` statement in the SQL standard.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE DATABASE](CREATE-DATABASE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-EXTERNAL-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-EXTERNAL-TABLE.html.md.erb b/reference/sql/DROP-EXTERNAL-TABLE.html.md.erb
new file mode 100644
index 0000000..01d0fb1
--- /dev/null
+++ b/reference/sql/DROP-EXTERNAL-TABLE.html.md.erb
@@ -0,0 +1,48 @@
+---
+title: DROP EXTERNAL TABLE
+---
+
+Removes an external table definition.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP EXTERNAL [WEB] TABLE [IF EXISTS] <name> [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP EXTERNAL TABLE` drops an existing external table definition from the database system. The external data sources or files are not deleted. To execute this command you must be the owner of the external table.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>WEB  </dt>
+<dd>Optional keyword for dropping external web tables.</dd>
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the external table does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing external table.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the external table (such as views).</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the external table if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the external table named `staging` if it exists:
+
+``` pre
+DROP EXTERNAL TABLE IF EXISTS staging;
+```
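+
+If other objects, such as views, depend on the external table, add `CASCADE` to drop them as well:
+
+``` pre
+DROP EXTERNAL TABLE staging CASCADE;
+```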
+
+## <a id="topic1__section6"></a>Compatibility
+
+There is no `DROP EXTERNAL TABLE` statement in the SQL standard.
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE EXTERNAL TABLE](CREATE-EXTERNAL-TABLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-FILESPACE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-FILESPACE.html.md.erb b/reference/sql/DROP-FILESPACE.html.md.erb
new file mode 100644
index 0000000..afae3fe
--- /dev/null
+++ b/reference/sql/DROP-FILESPACE.html.md.erb
@@ -0,0 +1,42 @@
+---
+title: DROP FILESPACE
+---
+
+Removes a filespace.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP FILESPACE [IF EXISTS]  <filespacename>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP FILESPACE` removes a filespace definition and its system-generated data directories from the system.
+
+A filespace can only be dropped by its owner or a superuser. The filespace must be empty of all tablespace objects before it can be dropped. It is possible that tablespaces in other databases may still be using a filespace even if no tablespaces in the current database are using the filespace.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the filespace does not exist. A notice is issued in this case.</dd>
+
+<dt>\<filespacename\>   </dt>
+<dd>The name of the filespace to remove.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the filespace `myfs`:
+
+``` pre
+DROP FILESPACE myfs;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+There is no `DROP FILESPACE` statement in the SQL standard or in PostgreSQL.
+
+## <a id="topic1__section7"></a>See Also
+
+[DROP TABLESPACE](DROP-TABLESPACE.html), [hawq filespace](../cli/admin_utilities/hawqfilespace.html#topic1)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-FUNCTION.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-FUNCTION.html.md.erb b/reference/sql/DROP-FUNCTION.html.md.erb
new file mode 100644
index 0000000..5ebd4e5
--- /dev/null
+++ b/reference/sql/DROP-FUNCTION.html.md.erb
@@ -0,0 +1,55 @@
+---
+title: DROP FUNCTION
+---
+
+Removes a function.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP FUNCTION [IF EXISTS] <name> ( [ [<argmode>] [<argname>] <argtype> 
+    [, ...] ] ) [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP FUNCTION` removes the definition of an existing function. To execute this command the user must be the owner of the function. The argument types to the function must be specified, since several different functions may exist with the same name and different argument lists.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the function does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing function.</dd>
+
+<dt>\<argmode\>   </dt>
+<dd>The mode of an argument: either `IN`, `OUT`, or `INOUT`. If omitted, the default is `IN`. Note that `DROP FUNCTION` does not actually pay any attention to `OUT` arguments, since only the input arguments are needed to determine the function's identity. So it is sufficient to list the `IN` and `INOUT` arguments.</dd>
+
+<dt>\<argname\>   </dt>
+<dd>The name of an argument. Note that `DROP FUNCTION` does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity.</dd>
+
+<dt>\<argtype\>   </dt>
+<dd>The data type(s) of the function's arguments (optionally schema-qualified), if any.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the function such as operators.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the function if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Drop the square root function:
+
+``` pre
+DROP FUNCTION sqrt(integer);
+```
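+
+When several overloads share a name, the argument list selects which one to drop (assuming a hypothetical function `myfunc` with two overloads):
+
+``` pre
+DROP FUNCTION myfunc(integer);
+DROP FUNCTION myfunc(integer, text);
+```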
+
+## <a id="topic1__section6"></a>Compatibility
+
+A `DROP FUNCTION` statement is defined in the SQL standard, but it is not compatible with this command.
+
+## <a id="topic1__section7"></a>See Also
+
+[ALTER FUNCTION](ALTER-FUNCTION.html), [CREATE FUNCTION](CREATE-FUNCTION.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-GROUP.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-GROUP.html.md.erb b/reference/sql/DROP-GROUP.html.md.erb
new file mode 100644
index 0000000..5fce3ae
--- /dev/null
+++ b/reference/sql/DROP-GROUP.html.md.erb
@@ -0,0 +1,31 @@
+---
+title: DROP GROUP
+---
+
+Removes a database role.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP GROUP [IF EXISTS] <name> [, ...]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP GROUP` is an obsolete command, though still accepted for backwards compatibility. Groups (and users) have been superseded by the more general concept of roles. See [DROP ROLE](DROP-ROLE.html) for more information.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the role does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name of an existing role.</dd>
+
+## <a id="topic1__section5"></a>Compatibility
+
+There is no `DROP GROUP` statement in the SQL standard.
+
+## <a id="topic1__section6"></a>See Also
+
+[DROP ROLE](DROP-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-OPERATOR-CLASS.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-OPERATOR-CLASS.html.md.erb b/reference/sql/DROP-OPERATOR-CLASS.html.md.erb
new file mode 100644
index 0000000..da22425
--- /dev/null
+++ b/reference/sql/DROP-OPERATOR-CLASS.html.md.erb
@@ -0,0 +1,54 @@
+---
+title: DROP OPERATOR CLASS
+---
+
+Removes an operator class.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP OPERATOR CLASS [IF EXISTS] <name> USING <index_method> [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP OPERATOR CLASS` drops an existing operator class. To execute this command you must be the owner of the operator class.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the operator class does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing operator class.</dd>
+
+<dt>\<index\_method\>   </dt>
+<dd>The name of the index access method the operator class is for.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the operator class.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the operator class if any objects depend on it. This is the default.</dd>
+
+## Notes
+
+This command will not succeed if there are any existing indexes that use the operator class. Add `CASCADE` to drop such indexes along with the operator class.
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the B-tree operator class `widget_ops`:
+
+``` pre
+DROP OPERATOR CLASS widget_ops USING btree;
+```
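+
+If existing indexes use the operator class, add `CASCADE` to drop those indexes along with it:
+
+``` pre
+DROP OPERATOR CLASS widget_ops USING btree CASCADE;
+```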
+
+## <a id="topic1__section6"></a>Compatibility
+
+There is no `DROP OPERATOR CLASS` statement in the SQL standard.
+
+## <a id="topic1__section7"></a>See Also
+
+[ALTER OPERATOR](ALTER-OPERATOR.html), [CREATE OPERATOR](CREATE-OPERATOR.html), [CREATE OPERATOR CLASS](CREATE-OPERATOR-CLASS.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-OPERATOR.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-OPERATOR.html.md.erb b/reference/sql/DROP-OPERATOR.html.md.erb
new file mode 100644
index 0000000..b59fde4
--- /dev/null
+++ b/reference/sql/DROP-OPERATOR.html.md.erb
@@ -0,0 +1,64 @@
+---
+title: DROP OPERATOR
+---
+
+Removes an operator.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP OPERATOR [IF EXISTS] <name> ( {<lefttype> | NONE} , 
+    {<righttype> | NONE} ) [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP OPERATOR` drops an existing operator from the database system. To execute this command you must be the owner of the operator.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the operator does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing operator.</dd>
+
+<dt>\<lefttype\>  </dt>
+<dd>The data type of the operator's left operand; write `NONE` if the operator has no left operand.</dd>
+
+<dt>\<righttype\> </dt>
+<dd>The data type of the operator's right operand; write `NONE` if the operator has no right operand.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the operator.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the operator if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the power operator `a^b` for type `integer`:
+
+``` pre
+DROP OPERATOR ^ (integer, integer);
+```
+
+Remove the left unary bitwise complement operator `~b` for type `bit`:
+
+``` pre
+DROP OPERATOR ~ (none, bit);
+```
+
+Remove the right unary factorial operator `x!` for type `bigint`:
+
+``` pre
+DROP OPERATOR ! (bigint, none);
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+There is no `DROP OPERATOR` statement in the SQL standard.
+
+## <a id="topic1__section7"></a>See Also
+
+[ALTER OPERATOR](ALTER-OPERATOR.html), [CREATE OPERATOR](CREATE-OPERATOR.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-OWNED.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-OWNED.html.md.erb b/reference/sql/DROP-OWNED.html.md.erb
new file mode 100644
index 0000000..50c5272
--- /dev/null
+++ b/reference/sql/DROP-OWNED.html.md.erb
@@ -0,0 +1,50 @@
+---
+title: DROP OWNED
+---
+
+Removes database objects owned by a database role.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP OWNED BY <name> [, ...] [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP OWNED` drops all the objects in the current database that are owned by one of the specified roles. Any privileges granted to the given roles on objects in the current database will also be revoked.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>\<name\>   </dt>
+<dd>The name of a role whose objects will be dropped, and whose privileges will be revoked.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the affected objects.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the objects owned by a role if any other database objects depend on one of the affected objects. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+`DROP OWNED` is often used to prepare for the removal of one or more roles. Because `DROP OWNED` only affects the objects in the current database, it is usually necessary to execute this command in each database that contains objects owned by a role that is to be removed.
+
+Using the `CASCADE` option may make the command recurse to objects owned by other users.
+
+The `REASSIGN OWNED` command is an alternative that reassigns the ownership of all the database objects owned by one or more roles.
+
+## <a id="topic1__section6"></a>Examples
+
+Remove any database objects owned by the role named `sally`:
+
+``` pre
+DROP OWNED BY sally;
+```
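+
+To preserve the objects instead, reassign their ownership before removing the role (here to a hypothetical role `bob`):
+
+``` pre
+REASSIGN OWNED BY sally TO bob;
+```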
+
+## <a id="topic1__section7"></a>Compatibility
+
+The `DROP OWNED` statement is a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[REASSIGN OWNED](REASSIGN-OWNED.html), [DROP ROLE](DROP-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-RESOURCE-QUEUE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-RESOURCE-QUEUE.html.md.erb b/reference/sql/DROP-RESOURCE-QUEUE.html.md.erb
new file mode 100644
index 0000000..473923f
--- /dev/null
+++ b/reference/sql/DROP-RESOURCE-QUEUE.html.md.erb
@@ -0,0 +1,65 @@
+---
+title: DROP RESOURCE QUEUE
+---
+
+Removes a resource queue.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP RESOURCE QUEUE <queue_name>
+         
+```
+
+## <a id="topic1__section3"></a>Description
+
+This command removes a resource queue from HAWQ. To drop a resource queue, the queue cannot have any roles assigned to it, nor can it have any statements waiting in the queue or have any children resource queues. Only a superuser can drop a resource queue.
+
+**Note:** The `pg_root` and `pg_default` resource queues cannot be dropped.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>\<queue\_name\>   </dt>
+<dd>The name of a resource queue to remove.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+Use [ALTER ROLE](ALTER-ROLE.html) to remove a user from a resource queue.
+
+To see all the currently active queries for all resource queues, perform the following query of the `pg_locks` table joined with the `pg_roles` and `pg_resqueue` tables:
+
+``` pre
+SELECT rolname, rsqname, locktype, objid, transaction, pid, 
+mode, granted FROM pg_roles, pg_resqueue, pg_locks WHERE 
+pg_roles.rolresqueue=pg_locks.objid AND 
+pg_locks.objid=pg_resqueue.oid;
+```
+
+To see the roles assigned to a resource queue, perform the following query of the `pg_roles` and `pg_resqueue` system catalog tables:
+
+``` pre
+SELECT rolname, rsqname FROM pg_roles, pg_resqueue WHERE 
+pg_roles.rolresqueue=pg_resqueue.oid;
+```
+
+## <a id="topic1__section6"></a>Examples
+
+Remove a role from a resource queue (and move the role to the default resource queue, `pg_default`):
+
+``` pre
+ALTER ROLE bob RESOURCE QUEUE NONE;
+```
+
+Remove the resource queue named `adhoc`:
+
+``` pre
+DROP RESOURCE QUEUE adhoc;
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+The `DROP RESOURCE QUEUE` statement is a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE RESOURCE QUEUE](CREATE-RESOURCE-QUEUE.html), [ALTER ROLE](ALTER-ROLE.html), [ALTER RESOURCE QUEUE](ALTER-RESOURCE-QUEUE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-ROLE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-ROLE.html.md.erb b/reference/sql/DROP-ROLE.html.md.erb
new file mode 100644
index 0000000..b1d305b
--- /dev/null
+++ b/reference/sql/DROP-ROLE.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: DROP ROLE
+---
+
+Removes a database role.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP ROLE [IF EXISTS] <name> [, ...]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP ROLE` removes the specified role(s). To drop a superuser role, you must be a superuser yourself. To drop non-superuser roles, you must have `CREATEROLE` privilege.
+
+A role cannot be removed if it is still referenced in any database; an error will be raised if so. Before dropping the role, you must drop all the objects it owns (or reassign their ownership) and revoke any privileges the role has been granted. The `REASSIGN OWNED` and `DROP OWNED` commands can be useful for this purpose.
+
+However, it is not necessary to remove role memberships involving the role; `DROP ROLE` automatically revokes any memberships of the target role in other roles, and of other roles in the target role. The other roles are not dropped nor otherwise affected.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the role does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name of the role to remove.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the roles named `sally` and `bob`:
+
+``` pre
+DROP ROLE sally, bob;
+```
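+
+If a role still owns objects, drop or reassign them first, as described above; for example:
+
+``` pre
+DROP OWNED BY sally;  -- run in each database that contains her objects
+DROP ROLE sally;
+```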
+
+## <a id="topic1__section6"></a>Compatibility
+
+The SQL standard defines `DROP ROLE`, but it allows only one role to be dropped at a time, and it specifies different privilege requirements than HAWQ uses.
+
+## <a id="topic1__section7"></a>See Also
+
+[ALTER ROLE](ALTER-ROLE.html), [CREATE ROLE](CREATE-ROLE.html), [DROP OWNED](DROP-OWNED.html), [REASSIGN OWNED](REASSIGN-OWNED.html), [SET ROLE](SET-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-SCHEMA.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-SCHEMA.html.md.erb b/reference/sql/DROP-SCHEMA.html.md.erb
new file mode 100644
index 0000000..8d7846f
--- /dev/null
+++ b/reference/sql/DROP-SCHEMA.html.md.erb
@@ -0,0 +1,45 @@
+---
+title: DROP SCHEMA
+---
+
+Removes a schema.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP SCHEMA [IF EXISTS] <name> [, ...] [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP SCHEMA` removes schemas from the database. A schema can only be dropped by its owner or a superuser. Note that the owner can drop the schema (and thereby all contained objects) even if the owner does not own some of the objects within it.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the schema does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name of the schema to remove.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drops any objects contained in the schema (tables, functions, etc.).</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the schema if it contains any objects. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the schema `mystuff` from the database, along with everything it contains:
+
+``` pre
+DROP SCHEMA mystuff CASCADE;
+```
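+
+To avoid an error when the schema may not exist:
+
+``` pre
+DROP SCHEMA IF EXISTS mystuff CASCADE;
+```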
+
+## <a id="topic1__section6"></a>Compatibility
+
+`DROP SCHEMA` is fully conforming with the SQL standard, except that the standard only allows one schema to be dropped per command. Also, the `IF EXISTS` option is a HAWQ extension.
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE SCHEMA](CREATE-SCHEMA.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-SEQUENCE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-SEQUENCE.html.md.erb b/reference/sql/DROP-SEQUENCE.html.md.erb
new file mode 100644
index 0000000..59c0d85
--- /dev/null
+++ b/reference/sql/DROP-SEQUENCE.html.md.erb
@@ -0,0 +1,45 @@
+---
+title: DROP SEQUENCE
+---
+
+Removes a sequence.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP SEQUENCE [IF EXISTS] <name> [, ...] [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP SEQUENCE` removes a sequence number generator. You must own the sequence to drop it (or be a superuser).
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the sequence does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of the sequence to remove.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the sequence.</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the sequence if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the sequence `myserial`:
+
+``` pre
+DROP SEQUENCE myserial;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+`DROP SEQUENCE` is fully conforming with the SQL standard, except that the standard only allows one sequence to be dropped per command. Also, the `IF EXISTS` option is a HAWQ extension.
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE SEQUENCE](CREATE-SEQUENCE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/DROP-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/DROP-TABLE.html.md.erb b/reference/sql/DROP-TABLE.html.md.erb
new file mode 100644
index 0000000..98022ce
--- /dev/null
+++ b/reference/sql/DROP-TABLE.html.md.erb
@@ -0,0 +1,47 @@
+---
+title: DROP TABLE
+---
+
+Removes a table.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+DROP TABLE [IF EXISTS] <name> [, ...] [CASCADE | RESTRICT]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`DROP TABLE` removes tables from the database. Only its owner may drop a table. To empty a table of rows without removing the table definition, use `DELETE` or `TRUNCATE`.
+
+`DROP TABLE` always removes any indexes, rules, and constraints that exist for the target table. However, to drop a table that is referenced by a view, `CASCADE` must be specified. `CASCADE` will remove a dependent view entirely.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>IF EXISTS  </dt>
+<dd>Do not throw an error if the table does not exist. A notice is issued in this case.</dd>
+
+<dt>\<name\>   </dt>
+<dd>The name (optionally schema-qualified) of the table to remove.</dd>
+
+<dt>CASCADE  </dt>
+<dd>Automatically drop objects that depend on the table (such as views).</dd>
+
+<dt>RESTRICT  </dt>
+<dd>Refuse to drop the table if any objects depend on it. This is the default.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Remove the table `mytable`:
+
+``` pre
+DROP TABLE mytable;
+```
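+
+To empty the table of rows while keeping its definition, use `TRUNCATE` instead:
+
+``` pre
+TRUNCATE mytable;
+```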
+
+## <a id="topic1__section6"></a>Compatibility
+
+`DROP TABLE` is fully conforming with the SQL standard, except that the standard only allows one table to be dropped per command. Also, the `IF EXISTS` option is a HAWQ extension.
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE TABLE](CREATE-TABLE.html), [ALTER TABLE](ALTER-TABLE.html), [TRUNCATE](TRUNCATE.html)