Posted to commits@hawq.apache.org by yo...@apache.org on 2016/08/29 16:46:41 UTC

[06/36] incubator-hawq-docs git commit: moving book configuration to new 'book' branch, for HAWQ-1027

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/COPY.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/COPY.html.md.erb b/reference/sql/COPY.html.md.erb
new file mode 100644
index 0000000..6069aa5
--- /dev/null
+++ b/reference/sql/COPY.html.md.erb
@@ -0,0 +1,256 @@
+---
+title: COPY
+---
+
+Copies data between a file and a table.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+COPY <table> [(<column> [, ...])] FROM {'<file>' | STDIN}
+     [ [WITH]
+       [OIDS]
+       [HEADER]
+       [DELIMITER [ AS ] '<delimiter>']
+       [NULL [ AS ] '<null string>']
+       [ESCAPE [ AS ] '<escape>' | 'OFF']
+       [NEWLINE [ AS ] 'LF' | 'CR' | 'CRLF']
+       [CSV [QUOTE [ AS ] '<quote>']
+            [FORCE NOT NULL <column> [, ...]]]
+       [FILL MISSING FIELDS]
+       [LOG ERRORS INTO <error_table> [KEEP]
+        SEGMENT REJECT LIMIT <count> [ROWS | PERCENT]] ]
+
+COPY {<table> [(<column> [, ...])] | (<query>)} TO {'<file>' | STDOUT}
+     [ [WITH]
+       [OIDS]
+       [HEADER]
+       [DELIMITER [ AS ] '<delimiter>']
+       [NULL [ AS ] '<null string>']
+       [ESCAPE [ AS ] '<escape>' | 'OFF']
+       [CSV [QUOTE [ AS ] '<quote>']
+            [FORCE QUOTE <column> [, ...]]] ]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`COPY` moves data between HAWQ tables and standard file-system files. `COPY TO` copies the contents of a table to a file, while `COPY FROM` copies data from a file to a table (appending the data to whatever is in the table already). `COPY TO` can also copy the results of a `SELECT` query.
+
+If a list of columns is specified, `COPY` will only copy the data in the specified columns to or from the file. If there are any columns in the table that are not in the column list, `COPY FROM` will insert the default values for those columns.
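+
+For example, a load that names only some columns lets the remaining columns fall back to their defaults. A minimal sketch (the `code` and `name` columns and the input file are assumed for illustration):
+
+``` pre
+-- Load only two columns; any other columns of 'country' receive
+-- their default values.
+COPY country (code, name) FROM '/home/usr1/sql/country_codes';
+```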
+
+`COPY` with a file name instructs the HAWQ master host to directly read from or write to a file. The file must be accessible to the master host and the name must be specified from the viewpoint of the master host. When `STDIN` or `STDOUT` is specified, data is transmitted via the connection between the client and the master.
+
+If `SEGMENT REJECT LIMIT` is used, then a `COPY FROM` operation will operate in single row error isolation mode. In this release, single row error isolation mode only applies to rows in the input file with format errors (for example, extra or missing attributes, attributes of a wrong data type, or invalid client encoding sequences). Constraint errors such as violation of a `NOT NULL`, `CHECK`, or `UNIQUE` constraint will still be handled in 'all-or-nothing' input mode. The user can specify the number of error rows acceptable (on a per-segment basis), after which the entire `COPY FROM` operation will be aborted and no rows will be loaded. Note that the count of error rows is per-segment, not per entire load operation. If the per-segment reject limit is not reached, all rows not containing an error will be loaded and any error rows discarded. If you would like to keep error rows for further examination, you can optionally declare an error table using the `LOG ERRORS INTO` clause. Any rows containing a format error would then be logged to the specified error table.
+
+**Outputs**
+
+On successful completion, a `COPY` command returns a command tag of the following form, where \<count\> is the number of rows copied:
+
+``` pre
+COPY <count>
+```
+
+If running a `COPY FROM` command in single row error isolation mode, the following notice message will be returned if any rows were not loaded due to format errors, where \<count\> is the number of rows rejected:
+
+``` pre
+NOTICE: Rejected <count> badly formatted rows.
+```
+
+## <a id="topic1__section5"></a>Parameters
+
+<dt> \<table\>   </dt>
+<dd>The name (optionally schema-qualified) of an existing table.</dd>
+
+<dt> \<column\>   </dt>
+<dd>An optional list of columns to be copied. If no column list is specified, all columns of the table will be copied.</dd>
+
+<dt> \<query\>   </dt>
+<dd>A `SELECT` or `VALUES` command whose results are to be copied. Note that parentheses are required around the query.</dd>
+
+<dt> \<file\>   </dt>
+<dd>The absolute path name of the input or output file.</dd>
+
+<dt>STDIN  </dt>
+<dd>Specifies that input comes from the client application.</dd>
+
+<dt>STDOUT  </dt>
+<dd>Specifies that output goes to the client application.</dd>
+
+<dt>OIDS  </dt>
+<dd>Specifies copying the OID for each row. (An error is raised if OIDS is specified for a table that does not have OIDs, or in the case of copying a query.)</dd>
+
+<dt> \<delimiter\>   </dt>
+<dd>The single ASCII character that separates columns within each row (line) of the file. The default is a tab character in text mode, a comma in `CSV` mode.</dd>
+
+<dt> \<null string\>   </dt>
+<dd>The string that represents a null value. The default is `\N` (backslash-N) in text mode, and an empty value with no quotes in `CSV` mode. You might prefer an empty string even in text mode for cases where you don't want to distinguish nulls from empty strings. When using `COPY FROM`, any data item that matches this string will be stored as a null value, so you should make sure that you use the same string as you used with `COPY TO`.</dd>
+
+<dt> \<escape\>   </dt>
+<dd>Specifies the single character that is used for C escape sequences (such as `\n`,`\t`,`\100`, and so on) and for quoting data characters that might otherwise be taken as row or column delimiters. Make sure to choose an escape character that is not used anywhere in your actual column data. The default escape character is `\` (backslash) for text files or `"` (double quote) for CSV files; however, it is possible to specify any other character to represent an escape. It is also possible to disable escaping on text-formatted files by specifying the value `'OFF'` as the escape value. This is very useful for data such as web log data that has many embedded backslashes that are not intended to be escapes.</dd>
+
+<dt>NEWLINE  </dt>
+<dd>Specifies the newline used in your data files: `LF` (Line feed, 0x0A), `CR` (Carriage return, 0x0D), or `CRLF` (Carriage return plus line feed, 0x0D 0x0A). If not specified, a HAWQ segment will detect the newline type by looking at the first row of data it receives and using the first newline type encountered.</dd>
+
+<dt>CSV  </dt>
+<dd>Selects Comma Separated Value (CSV) mode.</dd>
+
+<dt>HEADER  </dt>
+<dd>Specifies that a file contains a header line with the names of each column in the file. On output, the first line contains the column names from the table, and on input, the first line is ignored.</dd>
+
+<dt> \<quote\>   </dt>
+<dd>Specifies the quotation character in CSV mode. The default is double-quote.</dd>
+
+<dt>FORCE QUOTE  </dt>
+<dd>In `CSV COPY TO` mode, forces quoting to be used for all non-`NULL` values in each specified column. `NULL` output is never quoted.</dd>
+
+<dt>FORCE NOT NULL  </dt>
+<dd>In `CSV COPY FROM` mode, processes each specified column as though it were quoted and hence not a `NULL` value. For the default null string in `CSV` mode (nothing between two delimiters), this causes missing values to be evaluated as zero-length strings.</dd>
+
+<dt>FILL MISSING FIELDS  </dt>
+<dd>In `COPY FROM` mode for both `TEXT` and `CSV`, specifying `FILL MISSING FIELDS` will set missing trailing field values to `NULL` (instead of reporting an error) when a row of data has missing data fields at the end of a line or row. Blank rows, fields with a `NOT NULL` constraint, and trailing delimiters on a line will still report an error.</dd>
+
+<dt>LOG ERRORS INTO \<error\_table\> \[KEEP\]  </dt>
+
+<dd>This is an optional clause that can precede a `SEGMENT REJECT LIMIT` clause to log information about rows with formatting errors. The `INTO <error_table>` clause specifies an error table where rows with formatting errors will be logged when running in single row error isolation mode. You can then examine this error table to see error rows that were not loaded (if any). If the \<error\_table\> specified already exists, it will be used. If it does not exist, it will be automatically generated. If the command auto-generates the error table and no errors are produced, the default is to drop the error table after the operation completes unless `KEEP` is specified. If the table is auto-generated and the error limit is exceeded, the entire transaction is rolled back and no error data is saved. If you want the error table to persist in this case, create the error table prior to running the `COPY`. An error table is defined as follows:
+
+
+``` pre
+CREATE TABLE <error_table_name> ( cmdtime timestamptz, relname text, 
+    filename text, linenum int, bytenum int, errmsg text, 
+    rawdata text, rawbytes bytea ) DISTRIBUTED RANDOMLY;
+```
+</dd>
+
+<dt>SEGMENT REJECT LIMIT \<count\> \[ROWS | PERCENT\]  </dt>
+<dd>Runs a `COPY FROM` operation in single row error isolation mode. If the input rows have format errors they will be discarded provided that the reject limit count is not reached on any HAWQ segment instance during the load operation. The reject limit count can be specified as number of rows (the default) or percentage of total rows (1-100). If `PERCENT` is used, each segment starts calculating the bad row percentage only after the number of rows specified by the parameter `gp_reject_percent_threshold` has been processed. The default for `gp_reject_percent_threshold` is 300 rows. Constraint errors such as violation of a `NOT NULL` or `CHECK` constraint will still be handled in 'all-or-nothing' input mode. If the limit is not reached, all good rows will be loaded and any error rows discarded.</dd>
+
+## <a id="topic1__section6"></a>Notes
+
+`COPY` can only be used with tables, not with views. However, you can write `COPY (SELECT * FROM viewname) TO ...` to copy the current contents of a view.
+
+The `BINARY` key word causes all data to be stored/read as binary format rather than as text. It is somewhat faster than the normal text mode, but a binary-format file is less portable across machine architectures and HAWQ versions. Also, you cannot run `COPY FROM` in single row error isolation mode if the data is in binary format.
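+
+For example, a sketch of a binary unload (the output path is illustrative):
+
+``` pre
+COPY country TO '/home/usr1/sql/country_data.bin' WITH BINARY;
+```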
+
+You must have `SELECT` privilege on the table whose values are read by `COPY TO`, and `INSERT` privilege on the table into which values are inserted by `COPY FROM`.
+
+Files named in a `COPY` command are read or written directly by the database server, not by the client application. Therefore, they must reside on or be accessible to the HAWQ master host machine, not the client. They must be accessible to and readable or writable by the HAWQ system user (the user ID the server runs as), not the client. `COPY` naming a file is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.
+
+`COPY FROM` will invoke any check constraints on the destination table. However, it will not invoke rewrite rules. Note that in this release, violations of constraints are not evaluated for single row error isolation mode.
+
+`COPY` input and output is affected by `DateStyle`. To ensure portability to other HAWQ installations that might use non-default `DateStyle` settings, `DateStyle` should be set to ISO before using `COPY TO`.
+
+By default, `COPY` stops operation at the first error. This should not lead to problems in the event of a `COPY TO`, but the target table will already have received earlier rows in a `COPY FROM`. These rows will not be visible or accessible, but they still occupy disk space. This may amount to a considerable amount of wasted disk space if the failure happened well into a large `COPY FROM` operation. You may wish to invoke `VACUUM` to recover the wasted space. Another option would be to use single row error isolation mode to filter out error rows while still loading good rows.
+
+COPY supports creating readable foreign tables with error tables. The default for concurrently inserting into the error table is 127. You can use error tables with foreign tables under the following circumstances:
+
+-   Multiple foreign tables can use different error tables
+-   Multiple foreign tables cannot use the same error table
+
+## <a id="topic1__section7"></a>File Formats
+
+File formats supported by `COPY`.
+
+**Text Format**
+When `COPY` is used without the `BINARY` or `CSV` options, the data read or written is a text file with one line per table row. Columns in a row are separated by the \<delimiter\> character (tab by default). The column values themselves are strings generated by the output function, or acceptable to the input function, of each attribute's data type. The specified null string is used in place of columns that are null. `COPY FROM` will raise an error if any line of the input file contains more or fewer columns than are expected. If `OIDS` is specified, the OID is read or written as the first column, preceding the user data columns.
+
+The data file has two reserved characters that have special meaning to `COPY`:
+
+-   The designated delimiter character (tab by default), which is used to separate fields in the data file.
+-   A UNIX-style line feed (`\n` or `0x0a`), which is used to designate a new row in the data file. It is strongly recommended that applications generating `COPY` data convert data line feeds to UNIX-style line feeds rather than Microsoft Windows style carriage return line feeds (`\r\n` or `0x0d 0x0a`).
+
+If your data contains either of these characters, you must escape the character so `COPY` treats it as data and not as a field separator or new row.
+
+By default, the escape character is a `\` (backslash) for text-formatted files and a `"` (double quote) for csv-formatted files. If you want to use a different escape character, you can do so using the `ESCAPE AS` clause. Make sure to choose an escape character that is not used anywhere in your data file as an actual data value. You can also disable escaping in text-formatted files by using `ESCAPE 'OFF'`.
+
+For example, suppose you have a table with three columns and you want to load the following three fields using COPY.
+
+-   percentage sign = %
+-   vertical bar = |
+-   backslash = \\
+
+Your designated \<delimiter\> character is `|` (pipe character), and your designated \<escape\> character is `*` (asterisk). The formatted row in your data file would look like this:
+
+``` pre
+percentage sign = % | vertical bar = *| | backslash = \
+```
+
+Notice how the pipe character that is part of the data has been escaped using the asterisk character (\*). Also notice that we do not need to escape the backslash since we are using an alternative escape character.
+
+The following characters must be preceded by the escape character if they appear as part of a column value: the escape character itself, newline, carriage return, and the current delimiter character. You can specify a different escape character using the `ESCAPE AS` clause.
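+
+A load command matching the example above might look like the following sketch (the table name `signs` and the input file are illustrative):
+
+``` pre
+-- '|' is the delimiter and '*' the alternative escape character
+-- described above.
+COPY signs FROM '/home/usr1/sql/signs_data' WITH DELIMITER '|' ESCAPE AS '*';
+```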
+
+**CSV Format**
+
+This format is used for importing and exporting the Comma Separated Value (CSV) file format used by many other programs, such as spreadsheets. Instead of the escaping used by HAWQ standard text mode, it produces and recognizes the common CSV escaping mechanism.
+
+The values in each record are separated by the `DELIMITER` character. If the value contains the delimiter character, the `QUOTE` character, the `ESCAPE` character (which is double quote by default), the `NULL` string, a carriage return, or line feed character, then the whole value is prefixed and suffixed by the `QUOTE` character. You can also use `FORCE QUOTE` to force quotes when outputting non-`NULL` values in specific columns.
+
+The CSV format has no standard way to distinguish a `NULL` value from an empty string. HAWQ `COPY` handles this by quoting. A `NULL` is output as the `NULL` string and is not quoted, while a data value matching the `NULL` string is quoted. Therefore, using the default settings, a `NULL` is written as an unquoted empty string, while an empty string is written with double quotes (""). Reading values follows similar rules. You can use `FORCE NOT NULL` to prevent `NULL` input comparisons for specific columns.
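+
+For example, the following sketch loads a CSV file and treats empty fields in an assumed `region` column as zero-length strings rather than `NULL`:
+
+``` pre
+COPY sales FROM '/home/usr1/sql/sales_data.csv' WITH CSV FORCE NOT NULL region;
+```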
+
+Because backslash is not a special character in the `CSV` format, `\.`, the end-of-data marker, could also appear as a data value. To avoid any misinterpretation, a `\.` data value appearing as a lone entry on a line is automatically quoted on output, and on input, if quoted, is not interpreted as the end-of-data marker. If you are loading a file created by another application that has a single unquoted column and might have a value of `\.`, you might need to quote that value in the input file.
+
+**Note:** In `CSV` mode, all characters are significant. A quoted value surrounded by white space, or any characters other than `DELIMITER`, will include those characters. This can cause errors if you import data from a system that pads CSV lines with white space out to some fixed width. If such a situation arises you might need to preprocess the CSV file to remove the trailing white space, before importing the data into HAWQ.
+
+**Note:** `CSV` mode will both recognize and produce CSV files with quoted values containing embedded carriage returns and line feeds. Thus the files are not strictly one line per table row like text-mode files.
+
+**Note:** Many programs produce strange and occasionally perverse CSV files, so the file format is more a convention than a standard. Thus you might encounter some files that cannot be imported using this mechanism, and `COPY` might produce files that other programs cannot process.
+
+**Binary Format**
+
+The `BINARY` format consists of a file header, zero or more tuples containing the row data, and a file trailer. Headers and data are in network byte order.
+
+-   **File Header:** The file header consists of 15 bytes of fixed fields, followed by a variable-length header extension area. The fixed fields are:
+    -   **Signature:** 11-byte sequence PGCOPY\\n\\377\\r\\n\\0. Note that the zero byte is a required part of the signature. (The signature is designed to allow easy identification of files that have been munged by a non-8-bit-clean transfer. This signature will be changed by end-of-line-translation filters, dropped zero bytes, dropped high bits, or parity changes.)
+    -   **Flags field:** 32-bit integer bit mask to denote important aspects of the file format. Bits are numbered from 0 (LSB) to 31 (MSB). Note that this field is stored in network byte order (most significant byte first), as are all the integer fields used in the file format. Bits 16-31 are reserved to denote critical file format issues; a reader should abort if it finds an unexpected bit set in this range. Bits 0-15 are reserved to signal backwards-compatible format issues; a reader should simply ignore any unexpected bits set in this range. Currently only one flag is defined, and the rest must be zero (Bit 16: 1 if data has OIDs, 0 if not).
+    -   **Header extension area length:** 32-bit integer, length in bytes of remainder of header, not including self. Currently, this is zero, and the first tuple follows immediately. Future changes to the format might allow additional data to be present in the header. A reader should silently skip over any header extension data it does not know what to do with. The header extension area is envisioned to contain a sequence of self-identifying chunks. The flags field is not intended to tell readers what is in the extension area. Specific design of header extension contents is left for a later release.
+-   **Tuples:** Each tuple begins with a 16-bit integer count of the number of fields in the tuple. (Presently, all tuples in a table will have the same count, but that might not always be true.) Then, repeated for each field in the tuple, there is a 32-bit length word followed by that many bytes of field data. (The length word does not include itself, and can be zero.) As a special case, -1 indicates a NULL field value. No value bytes follow in the NULL case.
+
+    There is no alignment padding or any other extra data between fields.
+
+    Presently, all data values in a COPY BINARY file are assumed to be in binary format (format code one). It is anticipated that a future extension may add a header field that allows per-column format codes to be specified.
+
+    If OIDs are included in the file, the OID field immediately follows the field-count word. It is a normal field except that it's not included in the field-count. In particular it has a length word; this will allow handling of 4-byte vs. 8-byte OIDs without too much pain, and will allow OIDs to be shown as null if that ever proves desirable.
+
+-   **File Trailer:** The file trailer consists of a 16-bit integer word containing `-1`. This is easily distinguished from a tuple's field-count word. A reader should report an error if a field-count word is neither `-1` nor the expected number of columns. This provides an extra check against somehow getting out of sync with the data.
+
+## <a id="topic1__section11"></a>Examples
+
+Copy a table to the client using the vertical bar (|) as the field delimiter:
+
+``` pre
+COPY country TO STDOUT WITH DELIMITER '|';
+```
+
+Copy data from a file into the `country` table:
+
+``` pre
+COPY country FROM '/home/usr1/sql/country_data';
+```
+
+Copy into a file just the countries whose names start with 'A':
+
+``` pre
+COPY (SELECT * FROM country WHERE country_name LIKE 'A%') TO 
+'/home/usr1/sql/a_list_countries.copy';
+```
+
+Create an error table called `err_sales` to use with single row error isolation mode:
+
+``` pre
+CREATE TABLE err_sales ( cmdtime timestamptz, relname text,
+    filename text, linenum int, bytenum int, errmsg text,
+    rawdata text, rawbytes bytea ) DISTRIBUTED RANDOMLY;
+```
+
+Copy data from a file into the `sales` table using single row error isolation mode:
+
+``` pre
+COPY sales FROM '/home/usr1/sql/sales_data' LOG ERRORS INTO 
+err_sales SEGMENT REJECT LIMIT 10 ROWS;
+```
+
+## <a id="topic1__section12"></a>Compatibility
+
+There is no `COPY` statement in the SQL standard.
+
+## <a id="topic1__section13"></a>See Also
+
+[CREATE EXTERNAL TABLE](CREATE-EXTERNAL-TABLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CREATE-AGGREGATE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-AGGREGATE.html.md.erb b/reference/sql/CREATE-AGGREGATE.html.md.erb
new file mode 100644
index 0000000..a195224
--- /dev/null
+++ b/reference/sql/CREATE-AGGREGATE.html.md.erb
@@ -0,0 +1,162 @@
+---
+title: CREATE AGGREGATE
+---
+
+Defines a new aggregate function.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [ORDERED] AGGREGATE <name> (<input_data_type> [ , ... ]) 
+      ( SFUNC = <sfunc>,
+        STYPE = <state_data_type>
+        [, PREFUNC = <prefunc>]
+        [, FINALFUNC = <ffunc>]
+        [, INITCOND = <initial_condition>]
+        [, SORTOP = <sort_operator>] )
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE AGGREGATE` defines a new aggregate function. Some basic and commonly-used aggregate functions such as `count`, `min`, `max`, `sum`, `avg` and so on are already provided in HAWQ. If one defines new types or needs an aggregate function not already provided, then `CREATE AGGREGATE` can be used to provide the desired features.
+
+An aggregate function is identified by its name and input data types. Two aggregate functions in the same schema can have the same name if they operate on different input types. The name and input data types of an aggregate function must also be distinct from the name and input data types of every ordinary function in the same schema.
+
+An aggregate function is made from one, two or three ordinary functions (all of which must be `IMMUTABLE` functions):
+
+-   A state transition function \<sfunc\>
+-   An optional preliminary segment-level calculation function \<prefunc\>
+-   An optional final calculation function \<ffunc\>
+
+These functions are used as follows:
+
+``` pre
+sfunc( internal-state, next-data-values ) ---> next-internal-state
+prefunc( internal-state, internal-state ) ---> next-internal-state
+ffunc( internal-state ) ---> aggregate-value
+```
+
+You can specify `PREFUNC` as a method for optimizing aggregate execution. By specifying `PREFUNC`, the aggregate can be executed in parallel on segments first and then on the master. When a two-level execution is performed, `SFUNC` is executed on the segments to generate partial aggregate results, and `PREFUNC` is executed on the master to aggregate the partial results from segments. If single-level aggregation is performed, all the rows are sent to the master and \<sfunc\> is applied to the rows.
+
+Single-level aggregation and two-level aggregation are equivalent execution strategies. Either type of aggregation can be implemented in a query plan. When you implement the functions \<prefunc\> and \<sfunc\>, you must ensure that the invocation of \<sfunc\> on the segment instances followed by \<prefunc\> on the master produce the same result as single-level aggregation that sends all the rows to the master and then applies only the \<sfunc\> to the rows.
+
+HAWQ creates a temporary variable of data type \<stype\> to hold the current internal state of the aggregate function. At each input row, the aggregate argument values are calculated and the state transition function is invoked with the current state value and the new argument values to calculate a new internal state value. After all the rows have been processed, the final function is invoked once to calculate the aggregate return value. If there is no final function then the ending state value is returned as-is.
+
+An aggregate function can provide an optional initial condition, an initial value for the internal state value. This is specified and stored in the database as a value of type text, but it must be a valid external representation of a constant of the state value data type. If it is not supplied then the state value starts out `NULL`.
+
+If the state transition function is declared `STRICT`, then it cannot be called with `NULL` inputs. With such a transition function, aggregate execution behaves as follows. Rows with any null input values are ignored (the function is not called and the previous state value is retained). If the initial state value is `NULL`, then at the first row with all non-null input values, the first argument value replaces the state value, and the transition function is invoked at subsequent rows with all non-null input values. This is useful for implementing aggregates like `max`. Note that this behavior is only available when \<state\_data\_type\> is the same as the first \<input\_data\_type\>. When these types are different, you must supply a non-null initial condition or use a nonstrict transition function.
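+
+A minimal sketch of this pattern, building a `max`-like aggregate from the built-in strict function `int4larger` (the aggregate name is illustrative):
+
+``` pre
+CREATE AGGREGATE my_max(int4) (
+    SFUNC = int4larger,  -- strict: rows with NULL input are ignored
+    STYPE = int4 );      -- no INITCOND, so the state starts out NULL
+```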
+
+If the state transition function is not declared `STRICT`, then it will be called unconditionally at each input row, and must deal with `NULL` inputs and `NULL` transition values for itself. This allows the aggregate author to have full control over the aggregate handling of `NULL` values.
+
+If the final function is declared `STRICT`, then it will not be called when the ending state value is `NULL`; instead a `NULL` result will be returned automatically. (This is the normal behavior of `STRICT` functions.) In any case the final function has the option of returning a `NULL` value. For example, the final function for `avg` returns `NULL` when it sees there were zero input rows.
+
+Single argument aggregate functions, such as `min` or `max`, can sometimes be optimized by looking into an index instead of scanning every input row. If this aggregate can be so optimized, indicate it by specifying a sort operator. The basic requirement is that the aggregate must yield the first element in the sort ordering induced by the operator; in other words:
+
+``` pre
+SELECT agg(col) FROM tab; 
+```
+
+must be equivalent to:
+
+``` pre
+SELECT col FROM tab ORDER BY col USING sortop LIMIT 1;
+```
+
+Further assumptions are that the aggregate function ignores `NULL` inputs, and that it delivers a `NULL` result if and only if there were no non-null inputs. Ordinarily, a data type's `<` operator is the proper sort operator for `MIN`, and `>` is the proper sort operator for `MAX`. Note that the optimization will never actually take effect unless the specified operator is the "less than" or "greater than" strategy member of a B-tree index operator class.
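+
+For example, a `min`-like aggregate could declare its sort operator as in the following sketch (`my_min` and the use of the built-in `int4smaller` function are illustrative):
+
+``` pre
+CREATE AGGREGATE my_min(int4) (
+    SFUNC = int4smaller,
+    STYPE = int4,
+    SORTOP = < );  -- '<' is the proper sort operator for a MIN-like aggregate
+```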
+
+**Ordered Aggregates**
+
+If the optional qualification `ORDERED` appears, the created aggregate function is an *ordered aggregate*. In this case, the preliminary aggregation function \<prefunc\> cannot be specified.
+
+An ordered aggregate is called with the following syntax.
+
+``` pre
+<name> ( <arg> [ , ... ] [ORDER BY <sortspec> [ , ...]] ) 
+```
+
+If the optional `ORDER BY` is omitted, a system-defined ordering is used. The transition function \<sfunc\> of an ordered aggregate function is called on its input arguments in the specified order and on a single segment. There is a new column `aggordered` in the `pg_aggregate` table to indicate the aggregate function is defined as an ordered aggregate.
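+
+A sketch of defining and calling an ordered aggregate, using the built-in `array_append` function (the table and column names are illustrative):
+
+``` pre
+CREATE ORDERED AGGREGATE array_accum (anyelement) (
+    SFUNC = array_append,
+    STYPE = anyarray,
+    INITCOND = '{}' );
+
+SELECT array_accum(letter ORDER BY letter) FROM alphabet;
+```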
+
+## <a id="topic1__section5"></a>Parameters
+
+<dt> \<name\>   </dt>
+<dd>The name (optionally schema-qualified) of the aggregate function to create.</dd>
+
+<dt> \<input\_data\_type\>   </dt>
+<dd>An input data type on which this aggregate function operates. To create a zero-argument aggregate function, write \* in place of the list of input data types. An example of such an aggregate is `count(*)`.</dd>
+
+<dt> \<sfunc\>   </dt>
+<dd>The name of the state transition function to be called for each input row. For an N-argument aggregate function, the \<sfunc\> must take N+1 arguments, the first being of type \<state\_data\_type\> and the rest matching the declared input data types of the aggregate. The function must return a value of type \<state\_data\_type\>. This function takes the current state value and the current input data values, and returns the next state value.</dd>
+
+<dt> \<state\_data\_type\>   </dt>
+<dd>The data type for the aggregate state value.</dd>
+
+<dt> \<prefunc\>   </dt>
+<dd>The name of a preliminary aggregation function. This is a function of two arguments, both of type \<state\_data\_type\>. It must return a value of \<state\_data\_type\>. A preliminary function takes two transition state values and returns a new transition state value representing the combined aggregation. In HAWQ, if the result of the aggregate function is computed in a segmented fashion, the preliminary aggregation function is invoked on the individual internal states in order to combine them into an ending internal state.
+
+Note that this function is also called in hash aggregate mode within a segment. Therefore, if you define this aggregate function without a preliminary function, hash aggregate is never chosen. Since hash aggregate is efficient, consider defining a preliminary function whenever possible.
+
+PREFUNC is optional. If defined, it is executed on the master. Input to PREFUNC is partial results from segments, not the tuples. If PREFUNC is not defined, the aggregate cannot be executed in parallel. PREFUNC and gp\_enable\_multiphase\_agg are used as follows:
+
+-   gp\_enable\_multiphase\_agg = off: SFUNC is executed sequentially on the master. PREFUNC, even if defined, is unused.
+-   gp\_enable\_multiphase\_agg = on and PREFUNC is defined: SFUNC is executed in parallel on segments. PREFUNC is invoked on the master to aggregate partial results from segments.
+
+    ``` pre
+    CREATE OR REPLACE FUNCTION my_avg_accum(bytea,bigint) returns bytea as 'int8_avg_accum' language internal strict immutable;  
+    CREATE OR REPLACE FUNCTION my_avg_merge(bytea,bytea) returns bytea as 'int8_avg_amalg' language internal strict immutable;  
+    CREATE OR REPLACE FUNCTION my_avg_final(bytea) returns numeric as 'int8_avg' language internal strict immutable;  
+    CREATE AGGREGATE my_avg(bigint) (
+        stype = bytea,
+        sfunc = my_avg_accum,
+        prefunc = my_avg_merge,
+        finalfunc = my_avg_final,
+        initcond = '' );
+    ```
+</dd>
+
+<dt> \<ffunc\>   </dt>
+<dd>The name of the final function called to compute the aggregate result after all input rows have been traversed. The function must take a single argument of type `state_data_type`. The return data type of the aggregate is defined as the return type of this function. If \<ffunc\> is not specified, then the ending state value is used as the aggregate result, and the return type is \<state\_data\_type\>.</dd>
+
+<dt> \<initial\_condition\>   </dt>
+<dd>The initial setting for the state value. This must be a string constant in the form accepted for the data type \<state\_data\_type\>. If not specified, the state value starts out `NULL`.</dd>
+
+<dt> \<sort\_operator\>   </dt>
+<dd>The associated sort operator for a MIN- or MAX-like aggregate function. This is just an operator name (possibly schema-qualified). The operator is assumed to have the same input data types as the aggregate function (which must be a single-argument aggregate function).</dd>
+
+## <a id="topic1__section6"></a>Notes
+
+The ordinary functions used to define a new aggregate function must be defined first. Note that in this release of HAWQ, it is required that the \<sfunc\>, \<ffunc\>, and \<prefunc\> functions used to create the aggregate are defined as `IMMUTABLE`.
+
+Any compiled code (shared library files) for custom functions must be placed in the same location on every host in your HAWQ array (master and all segments). This location must also be in the `LD_LIBRARY_PATH` so that the server can locate the files.
+
+## Examples
+
+Create a sum of cubes aggregate:
+
+``` pre
+CREATE FUNCTION scube_accum(numeric, numeric) RETURNS numeric 
+    AS 'select $1 + $2 * $2 * $2' 
+    LANGUAGE SQL 
+    IMMUTABLE 
+    RETURNS NULL ON NULL INPUT;
+CREATE AGGREGATE scube(numeric) ( 
+    SFUNC = scube_accum, 
+    STYPE = numeric, 
+    INITCOND = 0 );
+```
+
+To test this aggregate:
+
+``` pre
+CREATE TABLE x(a INT);
+INSERT INTO x VALUES (1),(2),(3);
+SELECT scube(a) FROM x;
+```
+
+Correct answer for reference:
+
+``` pre
+SELECT sum(a*a*a) FROM x;
+```
+
+## <a id="topic1__section8"></a>Compatibility
+
+`CREATE AGGREGATE` is a HAWQ language extension. The SQL standard does not provide for user-defined aggregate functions.
+
+## <a id="topic1__section9"></a>See Also
+
+[ALTER AGGREGATE](ALTER-AGGREGATE.html), [DROP AGGREGATE](DROP-AGGREGATE.html), [CREATE FUNCTION](CREATE-FUNCTION.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CREATE-DATABASE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-DATABASE.html.md.erb b/reference/sql/CREATE-DATABASE.html.md.erb
new file mode 100644
index 0000000..7ebab4e
--- /dev/null
+++ b/reference/sql/CREATE-DATABASE.html.md.erb
@@ -0,0 +1,86 @@
+---
+title: CREATE DATABASE
+---
+
+Creates a new database.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE DATABASE <database_name> [[WITH] <database_attribute>=<value> [ ... ]]
+```
+where \<database\_attribute\> is:
+ 
+``` pre
+    [OWNER=<database_owner>]
+    [TEMPLATE=<template>]
+    [ENCODING=<encoding>]
+    [TABLESPACE=<tablespace>]
+    [CONNECTION LIMIT=<connection_limit>]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE DATABASE` creates a new database. To create a database, you must be a superuser or have the special `CREATEDB` privilege.
+
+The creator becomes the owner of the new database by default. Superusers can create databases owned by other users by using the `OWNER` clause. They can even create databases owned by users with no special privileges. Non-superusers with `CREATEDB` privilege can only create databases owned by themselves.
+
+By default, the new database will be created by cloning the standard system database `template1`. A different template can be specified by writing `TEMPLATE <template>`. In particular, by writing `TEMPLATE template0`, you can create a clean database containing only the standard objects predefined by HAWQ. This is useful if you wish to avoid copying any installation-local objects that may have been added to `template1`.
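+
+For example, to create a clean database from `template0` (the database name is illustrative):
+
+``` pre
+CREATE DATABASE cleandb TEMPLATE=template0;
+```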
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>\<database_name\></dt>
+<dd>The name of a database to create.
+
+**Note:** HAWQ reserves the database name "hcatalog" for system use.</dd>
+
+<dt>OWNER=\<database_owner\> </dt>
+<dd>The name of the database user who will own the new database, or `DEFAULT` to use the default owner (the user executing the command).</dd>
+
+<dt>TEMPLATE=\<template\> </dt>
+<dd>The name of the template from which to create the new database, or `DEFAULT` to use the default template (*template1*).</dd>
+
+<dt>ENCODING=\<encoding\> </dt>
+<dd>Character set encoding to use in the new database. Specify a string constant (such as `'SQL_ASCII'`), an integer encoding number, or `DEFAULT` to use the default encoding.</dd>
+
+<dt>TABLESPACE=\<tablespace\> </dt>
+<dd>The name of the tablespace that will be associated with the new database, or `DEFAULT` to use the template database's tablespace. This tablespace will be the default tablespace used for objects created in this database.</dd>
+
+<dt>CONNECTION LIMIT=\<connection_limit\></dt>
+<dd>The maximum number of concurrent connections possible. The default of -1 means there is no limitation.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+`CREATE DATABASE` cannot be executed inside a transaction block.
+
+When you copy a database by specifying its name as the template, no other sessions can be connected to the template database while it is being copied. New connections to the template database are locked out until `CREATE DATABASE` completes.
+
+The `CONNECTION LIMIT` is not enforced against superusers.
+
+## <a id="topic1__section6"></a>Examples
+
+To create a new database:
+
+``` pre
+CREATE DATABASE gpdb;
+```
+
+To create a database `sales` owned by user `salesapp` with a default tablespace of `salesspace`:
+
+``` pre
+CREATE DATABASE sales OWNER=salesapp TABLESPACE=salesspace;
+```
+
+To create a database `music` which supports the ISO-8859-1 character set:
+
+``` pre
+CREATE DATABASE music ENCODING='LATIN1';
+```
+
+## <a id="topic1__section7"></a>Compatibility
+
+There is no `CREATE DATABASE` statement in the SQL standard. Databases are equivalent to catalogs, whose creation is implementation-defined.
+
+## <a id="topic1__section8"></a>See Also
+
+[DROP DATABASE](DROP-DATABASE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb b/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
new file mode 100644
index 0000000..2b164dc
--- /dev/null
+++ b/reference/sql/CREATE-EXTERNAL-TABLE.html.md.erb
@@ -0,0 +1,333 @@
+---
+title: CREATE EXTERNAL TABLE
+---
+
+Defines a new external table.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [READABLE] EXTERNAL TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+      LOCATION ('gpfdist://<filehost>[:<port>]/<file_pattern>[#<transform>]' [, ...])
+        | ('gpfdists://<filehost>[:<port>]/<file_pattern>[#<transform>]' [, ...])
+        | ('pxf://<host>[:<port>]/<path-to-data><pxf parameters>')
+      FORMAT 'TEXT'
+            [( [HEADER]
+               [DELIMITER [AS] '<delimiter>' | 'OFF']
+               [NULL [AS] '<null string>']
+               [ESCAPE [AS] '<escape>' | 'OFF']
+               [NEWLINE [ AS ] 'LF' | 'CR' | 'CRLF']
+               [FILL MISSING FIELDS] )]
+           | 'CSV'
+            [( [HEADER]
+               [QUOTE [AS] '<quote>']
+               [DELIMITER [AS] '<delimiter>']
+               [NULL [AS] '<null string>']
+               [FORCE NOT NULL <column> [, ...]]
+               [ESCAPE [AS] '<escape>']
+               [NEWLINE [ AS ] 'LF' | 'CR' | 'CRLF']
+               [FILL MISSING FIELDS] )]
+           | 'CUSTOM' (Formatter=<formatter specifications>)
+     [ ENCODING '<encoding>' ]
+     [ [LOG ERRORS INTO <error_table>] SEGMENT REJECT LIMIT <count>
+       [ROWS | PERCENT] ]
+
+CREATE [READABLE] EXTERNAL WEB TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+      LOCATION ('http://<webhost>[:<port>]/<path>/<file>' [, ...])
+    | EXECUTE '<command>' ON { MASTER | <number_of_segments> | SEGMENT #<num> }
+      FORMAT 'TEXT'
+            [( [HEADER]
+               [DELIMITER [AS] '<delimiter>' | 'OFF']
+               [NULL [AS] '<null string>']
+               [ESCAPE [AS] '<escape>' | 'OFF']
+               [NEWLINE [ AS ] 'LF' | 'CR' | 'CRLF']
+               [FILL MISSING FIELDS] )]
+           | 'CSV'
+            [( [HEADER]
+               [QUOTE [AS] '<quote>']
+               [DELIMITER [AS] '<delimiter>']
+               [NULL [AS] '<null string>']
+               [FORCE NOT NULL <column> [, ...]]
+               [ESCAPE [AS] '<escape>']
+               [NEWLINE [ AS ] 'LF' | 'CR' | 'CRLF']
+               [FILL MISSING FIELDS] )]
+           | 'CUSTOM' (Formatter=<formatter specifications>)
+     [ ENCODING '<encoding>' ]
+     [ [LOG ERRORS INTO <error_table>] SEGMENT REJECT LIMIT <count>
+       [ROWS | PERCENT] ]
+
+CREATE WRITABLE EXTERNAL TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+     LOCATION ('gpfdist://<outputhost>[:<port>]/<filename>[#<transform>]'
+        | 'gpfdists://<outputhost>[:<port>]/<file_pattern>[#<transform>]'
+          [, ...])
+       | ('pxf://<host>[:<port>]/<path-to-data><pxf parameters>')
+      FORMAT 'TEXT'
+               [( [DELIMITER [AS] '<delimiter>']
+               [NULL [AS] '<null string>']
+               [ESCAPE [AS] '<escape>' | 'OFF'] )]
+          | 'CSV'
+               [( [QUOTE [AS] '<quote>']
+               [DELIMITER [AS] '<delimiter>']
+               [NULL [AS] '<null string>']
+               [FORCE QUOTE <column> [, ...]]
+               [ESCAPE [AS] '<escape>'] )]
+           | 'CUSTOM' (Formatter=<formatter specifications>)
+    [ ENCODING '<write_encoding>' ]
+    [ DISTRIBUTED BY (<column> [, ...] ) | DISTRIBUTED RANDOMLY ]
+
+CREATE WRITABLE EXTERNAL WEB TABLE <table_name>
+    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
+    EXECUTE '<command>' ON #<num>
+    FORMAT 'TEXT'
+               [( [DELIMITER [AS] '<delimiter>']
+               [NULL [AS] '<null string>']
+               [ESCAPE [AS] '<escape>' | 'OFF'] )]
+          | 'CSV'
+               [( [QUOTE [AS] '<quote>']
+               [DELIMITER [AS] '<delimiter>']
+               [NULL [AS] '<null string>']
+               [FORCE QUOTE <column> [, ...]]
+               [ESCAPE [AS] '<escape>'] )]
+          | 'CUSTOM' (Formatter=<formatter specifications>)
+    [ ENCODING '<write_encoding>' ]
+    [ DISTRIBUTED BY (<column> [, ...] ) | DISTRIBUTED RANDOMLY ]
+```
+
+where \<pxf parameters\> is:
+
+``` pre
+   ?FRAGMENTER=<class>&ACCESSOR=<class>&RESOLVER=<class>[&<custom-option>=<value>...]
+ | ?PROFILE=<profile-name>[&<custom-option>=<value>...]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE EXTERNAL TABLE` or `CREATE EXTERNAL WEB TABLE` creates a new readable external table definition in HAWQ. Readable external tables are typically used for fast, parallel data loading. Once an external table is defined, you can query its data directly (and in parallel) using SQL commands. For example, you can select, join, or sort external table data. You can also create views for external tables. DML operations (`UPDATE`, `INSERT`, `DELETE`, or `TRUNCATE`) are not allowed on readable external tables.
+
+`CREATE WRITABLE EXTERNAL TABLE` or `CREATE WRITABLE EXTERNAL WEB TABLE` creates a new writable external table definition in HAWQ. Writable external tables are typically used for unloading data from the database into a set of files or named pipes.
+
+Writable external web tables can also be used to output data to an executable program. Once a writable external table is defined, data can be selected from database tables and inserted into the writable external table. Writable external tables only allow `INSERT` operations; `SELECT`, `UPDATE`, `DELETE`, or `TRUNCATE` are not allowed.
+
+Regular readable external tables can access static flat files or, by using HAWQ Extensions Framework (PXF), data from other sources. PXF plug-ins are included for HDFS, HBase, and Hive tables. Custom plug-ins can be created for other external data sources using the PXF API.
+
+Web external tables access dynamic data sources, either on a web server or by executing OS commands or scripts.
+
+The LOCATION clause specifies the location of the external data. The location string begins with a protocol string that specifies the storage type and protocol used to access the data. The `gpfdist://` protocol specifies data files served by one or more instances of the Greenplum parallel file distribution server `gpfdist`. The `http://` protocol specifies one or more HTTP URLs and is used with web tables. The `pxf://` protocol specifies data accessed through the PXF service, which provides access to data in a Hadoop system. Using the PXF API, you can create PXF plug-ins to provide HAWQ access to any other data source.
+
+**Note:** The `file://` protocol is deprecated. Instead, use the `gpfdist://`, `gpfdists://`, or `pxf://` protocol, or the `COPY` command.
+
+The `FORMAT` clause is used to describe how external table files are formatted. Valid flat file formats, including files in HDFS, are delimited text (`TEXT`) and comma separated values (`CSV`) format for `gpfdist` protocols. If the data in the file does not use the default column delimiter, escape character, null string, and so on, you must specify the additional formatting options so that the data in the external file is read correctly by HAWQ.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>READABLE | WRITABLE  </dt>
+<dd>Specifies the type of external table, readable being the default. Readable external tables are used for loading data into HAWQ. Writable external tables are used for unloading data.</dd>
+
+<dt>WEB  </dt>
+<dd>Creates a readable or writable web external table definition in HAWQ. There are two forms of readable web external tables: those that access files via the `http://` protocol and those that access data by executing OS commands. Writable web external tables output data to an executable program that can accept an input stream of data. Web external tables are not rescannable during query execution.</dd>
+
+<dt> \<table\_name\>   </dt>
+<dd>The name of the new external table.</dd>
+
+<dt> \<column\_name\>   </dt>
+<dd>The name of a column to create in the external table definition. Unlike regular tables, external tables do not have column constraints or default values, so do not specify those.</dd>
+
+<dt>LIKE \<other\_table\>   </dt>
+<dd>The `LIKE` clause specifies a table from which the new external table automatically copies all column names, data types and HAWQ distribution policy. If the original table specifies any column constraints or default column values, those will not be copied over to the new external table definition.</dd>
+
+<dt> \<data\_type\>   </dt>
+<dd>The data type of the column.</dd>
+
+<dt>LOCATION ('\<protocol\>://\<host\>\[:\<port\>\]/\<path\>/\<file\>' \[, ...\])   </dt>
+<dd>For readable external tables, specifies the URI of the external data source(s) to be used to populate the external table or web table. Regular readable external tables allow the `file`, `gpfdist`, and `pxf` protocols. Web external tables allow the `http` protocol. If \<port\> is omitted, the `http` and `gpfdist` protocols assume port `8080` and the `pxf` protocol assumes the \<host\> is a high availability nameservice string. If using the `gpfdist` protocol, the \<path\> is relative to the directory from which `gpfdist` is serving files (the directory specified when you started the `gpfdist` program). Also, the \<path\> can use wildcards (or other C-style pattern matching) in the \<file\> name part of the location to denote multiple files in a directory. For example:
+
+``` pre
+'gpfdist://filehost:8081/*'
+'gpfdist://masterhost/my_load_file'
+'http://intranet.example.com/finance/expenses.csv'
+'pxf://mdw:41200/sales/*.csv?Profile=HDFS'
+```
+
+For writable external tables, specifies the URI location of the `gpfdist` process that will collect data output from the HAWQ segments and write it to the named file. The \<path\> is relative to the directory from which `gpfdist` is serving files (the directory specified when you started the `gpfdist` program). If multiple `gpfdist` locations are listed, the segments sending data will be evenly divided across the available output locations. For example:
+
+``` pre
+'gpfdist://outputhost:8081/data1.out',
+'gpfdist://outputhost:8081/data2.out'
+```
+
+With two `gpfdist` locations listed as in the above example, half of the segments would send their output data to the `data1.out` file and the other half to the `data2.out` file.
+
+For the `pxf` protocol, the `LOCATION` string specifies the \<host\> and \<port\> of the PXF service, the location of the data, and the PXF plug-ins (Java classes) used to convert the data between storage format and HAWQ format. If the \<port\> is omitted, the \<host\> is taken to be the logical name for the high availability name service and the \<port\> is the value of the `pxf_service_port` configuration variable, 51200 by default. The URL parameters `FRAGMENTER`, `ACCESSOR`, and `RESOLVER` are the names of PXF plug-ins (Java classes) that convert between the external data format and HAWQ data format. The `FRAGMENTER` parameter is only used with readable external tables. PXF allows combinations of these parameters to be configured as profiles so that a single `PROFILE` parameter can be specified to access external data, for example `?PROFILE=Hive`. Additional \<custom-options\> can be added to the LOCATION URI to further describe the external data format or storage options (see [Additional Options](../../pxf/HDFSFileDataPXF.html#additionaloptions)). For details about the plug-ins and profiles provided with PXF and information about creating custom plug-ins for other data sources see [Working with PXF and External Data](../../pxf/HawqExtensionFrameworkPXF.html).</dd>
+
+<dt>EXECUTE '\<command\>' ON ...  </dt>
+<dd>Allowed for readable web external tables or writable external tables only. For readable web external tables, specifies the OS command to be executed by the segment instances. The \<command\> can be a single OS command or a script. If \<command\> executes a script, that script must reside in the same location on all of the segment hosts and be executable by the HAWQ superuser (`gpadmin`).
+
+For writable external tables, the \<command\> specified in the `EXECUTE` clause must be prepared to have data piped into it, as segments having data to send write their output to the specified program. HAWQ uses virtual elastic segments to run its queries.
+
+The `ON` clause is used to specify which segment instances will execute the given command. For writable external tables, only `ON` \<number\> is supported.
+
+**Note:** ON ALL/HOST is deprecated when creating a readable external table, as HAWQ cannot guarantee scheduling executors on a specific host. Instead, use `ON MASTER`, `ON <number>`, or `ON SEGMENT <virtual_segment>` to specify which segment instances will execute the command.
+
+-   `ON MASTER` runs the command on the master host only.
+-   `ON <number>` means the command will be executed by the specified number of virtual segments. The particular segments are chosen by the HAWQ system's Resource Manager at runtime.
+-   `ON SEGMENT <virtual_segment>` means the command will be executed only once by the specified segment.
+</dd>
+
+<dt>FORMAT 'TEXT | CSV' (\<options\>)   </dt>
+<dd>Specifies the format of the external or web table data - either plain text (`TEXT`) or comma separated values (`CSV`) format.</dd>
+
+<dt>DELIMITER  </dt>
+<dd>Specifies a single ASCII character that separates columns within each row (line) of data. The default is a tab character in `TEXT` mode, a comma in `CSV` mode. In `TEXT` mode for readable external tables, the delimiter can be set to `OFF` for special use cases in which unstructured data is loaded into a single-column table.</dd>
+
+<dt>NULL  </dt>
+<dd>Specifies the string that represents a `NULL` value. The default is `\N` (backslash-N) in `TEXT` mode, and an empty value with no quotations in `CSV` mode. You might prefer an empty string even in `TEXT` mode for cases where you do not want to distinguish `NULL` values from empty strings. When using external and web tables, any data item that matches this string will be considered a `NULL` value.</dd>
+
+<dt>ESCAPE  </dt>
+<dd>Specifies the single character that is used for C escape sequences (such as `\n`,`\t`,`\100`, and so on) and for escaping data characters that might otherwise be taken as row or column delimiters. Make sure to choose an escape character that is not used anywhere in your actual column data. The default escape character is a \\ (backslash) for text-formatted files and a `"` (double quote) for csv-formatted files, however it is possible to specify another character to represent an escape. It is also possible to disable escaping in text-formatted files by specifying the value `'OFF'` as the escape value. This is very useful for data such as text-formatted web log data that has many embedded backslashes that are not intended to be escapes.</dd>
+
+<dt>NEWLINE  </dt>
+<dd>Specifies the newline used in your data files: `LF` (Line feed, 0x0A), `CR` (Carriage return, 0x0D), or `CRLF` (Carriage return plus line feed, 0x0D 0x0A). If not specified, a HAWQ segment will detect the newline type by looking at the first row of data it receives and using the first newline type encountered.</dd>
+
+<dt>HEADER  </dt>
+<dd>For readable external tables, specifies that the first line in the data file(s) is a header row (contains the names of the table columns) and should not be included as data for the table. If using multiple data source files, all files must have a header row.
+
+**Note:** The `HEADER` formatting option is not allowed with PXF.
+For CSV files or other files that include a header line, use an error table instead of the `HEADER` formatting option.</dd>
+
+<dt>QUOTE  </dt>
+<dd>Specifies the quotation character for `CSV` mode. The default is double-quote (`"`).</dd>
+
+<dt>FORCE NOT NULL  </dt>
+<dd>In `CSV` mode, processes each specified column as though it were quoted and hence not a `NULL` value. For the default null string in `CSV` mode (nothing between two delimiters), this causes missing values to be evaluated as zero-length strings.</dd>
+
+<dt>FORCE QUOTE  </dt>
+<dd>In `CSV` mode for writable external tables, forces quoting to be used for all non-`NULL` values in each specified column. `NULL` output is never quoted.</dd>
+
+<dt>FILL MISSING FIELDS  </dt>
+<dd>In both `TEXT` and `CSV` mode for readable external tables, specifying `FILL MISSING FIELDS` will set missing trailing field values to `NULL` (instead of reporting an error) when a row of data has missing data fields at the end of a line or row. Blank rows, fields with a `NOT NULL` constraint, and trailing delimiters on a line will still report an error.</dd>
+
+<dt>ENCODING '\<encoding\>'   </dt>
+<dd>Character set encoding to use for the external table. Specify a string constant (such as `'SQL_ASCII'`), an integer encoding number, or `DEFAULT` to use the default client encoding.</dd>
+
+<dt>LOG ERRORS INTO \<error\_table\>  </dt>
+<dd>This is an optional clause that can precede a `SEGMENT REJECT LIMIT` clause to log information about rows with formatting errors. It specifies an error table where rows with formatting errors will be logged when running in single row error isolation mode. You can then examine this \<error\_table\> to see error rows that were not loaded (if any). If the \<error\_table\> specified already exists, it will be used. If it does not exist, it will be automatically generated.</dd>
+
+<dt>SEGMENT REJECT LIMIT \<count\> \[ROWS | PERCENT\]  </dt>
+<dd>Runs a `COPY FROM` operation in single row error isolation mode. If the input rows have format errors, they will be discarded, provided that the reject limit \<count\> is not reached on any HAWQ segment instance during the load operation. The reject limit \<count\> can be specified as a number of rows (the default) or a percentage of total rows (1-100). If `PERCENT` is used, each segment starts calculating the bad row percentage only after the number of rows specified by the parameter `gp_reject_percent_threshold` has been processed. The default for `gp_reject_percent_threshold` is 300 rows. Constraint errors such as violation of a `NOT NULL` or `CHECK` constraint will still be handled in "all-or-nothing" input mode. If the limit is not reached, all good rows will be loaded and any error rows discarded.</dd>
+
+<dt>DISTRIBUTED RANDOMLY  </dt>
+<dd>Used to declare the HAWQ distribution policy for a writable external table. By default, writable external tables are distributed randomly. If the source table you are exporting data from has a hash distribution policy, defining the same distribution key column(s) for the writable external table will improve unload performance by eliminating the need to move rows over the interconnect. When you issue an unload command such as `INSERT INTO wex_table SELECT * FROM source_table`, the rows that are unloaded can be sent directly from the segments to the output location if the two tables have the same hash distribution policy.</dd>
+
+## <a id="topic1__section5"></a>Examples
+
+Start the `gpfdist` file server program in the background on port `8081` serving files from directory `/var/data/staging`:
+
+``` pre
+gpfdist -p 8081 -d /var/data/staging -l /home/gpadmin/log &
+```
+
+Create a readable external table named `ext_customer` using the `gpfdist` protocol and any text formatted files (`*.txt`) found in the `gpfdist` directory. The files are formatted with a pipe (`|`) as the column delimiter and an empty space as `NULL`. Also access the external table in single row error isolation mode:
+
+``` pre
+CREATE EXTERNAL TABLE ext_customer
+   (id int, name text, sponsor text)
+   LOCATION ( 'gpfdist://filehost:8081/*.txt' )
+   FORMAT 'TEXT' ( DELIMITER '|' NULL ' ')
+   LOG ERRORS INTO err_customer SEGMENT REJECT LIMIT 5;
+```
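+
+A variation on the previous definition (a hypothetical sketch combining the `ESCAPE` and `SEGMENT REJECT LIMIT ... PERCENT` options described above) disables escape processing for data with embedded backslashes and expresses the per-segment reject limit as a percentage of rows:
+
+``` pre
+CREATE EXTERNAL TABLE ext_customer_pct
+   (id int, name text, sponsor text)
+   LOCATION ( 'gpfdist://filehost:8081/*.txt' )
+   FORMAT 'TEXT' ( DELIMITER '|' NULL ' ' ESCAPE 'OFF' )
+   LOG ERRORS INTO err_customer SEGMENT REJECT LIMIT 10 PERCENT;
+```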
+
+Create the same readable external table definition as above, but with CSV formatted files:
+
+``` pre
+CREATE EXTERNAL TABLE ext_customer 
+   (id int, name text, sponsor text)
+   LOCATION ( 'gpfdist://filehost:8081/*.csv' )
+   FORMAT 'CSV' ( DELIMITER ',' );
+```
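+
+The `CSV` formatting options described above can be combined in a single definition. A sketch (hypothetical table name; assumes Latin-1 source files) that treats the `name` column as never `NULL`, tolerates missing trailing fields, and declares the source encoding:
+
+``` pre
+CREATE EXTERNAL TABLE ext_customer_opts
+   (id int, name text, sponsor text)
+   LOCATION ( 'gpfdist://filehost:8081/*.csv' )
+   FORMAT 'CSV' ( DELIMITER ',' FORCE NOT NULL name FILL MISSING FIELDS )
+   ENCODING 'LATIN1';
+```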
+
+Create a readable external table using the `pxf` protocol to read data in HDFS files:
+
+``` pre
+CREATE EXTERNAL TABLE ext_customer 
+    (id int, name text, sponsor text)
+LOCATION ('pxf://mdw:51200/sales/customers/customers.tsv.gz'
+          '?Fragmenter=org.apache.hawq.pxf.plugins.hdfs.HdfsDataFragmenter'
+          '&Accessor=org.apache.hawq.pxf.plugins.hdfs.LineBreakAccessor'
+          '&Resolver=org.apache.hawq.pxf.plugins.hdfs.StringPassResolver')
+FORMAT 'TEXT' (DELIMITER = E'\t');
+```
+
+The `LOCATION` string in this command is equivalent to the previous example, but using a PXF Profile:
+
+``` pre
+CREATE EXTERNAL TABLE ext_customer 
+    (id int, name text, sponsor text)
+LOCATION ('pxf://mdw:51200/sales/customers/customers.tsv.gz?Profile=HdfsTextSimple')
+FORMAT 'TEXT' (DELIMITER = E'\t');
+```
+
+Create a readable web external table that executes a script on five virtual segment hosts. (The script must reside at the same location on all segment hosts.)
+
+``` pre
+CREATE EXTERNAL WEB TABLE log_output (linenum int, message text)
+EXECUTE '/var/load_scripts/get_log_data.sh' ON 5 
+FORMAT 'TEXT' (DELIMITER '|');
+```
+
+Create a writable external table named `sales_out` that uses `gpfdist` to write output data to a file named `sales.out`. The files are formatted with a pipe (`|`) as the column delimiter and an empty space as `NULL`.
+
+``` pre
+CREATE WRITABLE EXTERNAL TABLE sales_out (LIKE sales) 
+   LOCATION ('gpfdist://etl1:8081/sales.out')
+   FORMAT 'TEXT' ( DELIMITER '|' NULL ' ')
+   DISTRIBUTED BY (txn_id);
+```
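+
+For CSV output, the `FORCE QUOTE` option described above forces quoting of non-`NULL` values in selected columns. A sketch (the output file name is hypothetical):
+
+``` pre
+CREATE WRITABLE EXTERNAL TABLE sales_csv_out (LIKE sales)
+   LOCATION ('gpfdist://etl1:8081/sales.csv.out')
+   FORMAT 'CSV' ( DELIMITER ',' FORCE QUOTE txn_id )
+   DISTRIBUTED BY (txn_id);
+```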
+
+The following command sequence shows how to create a writable external web table using a specified number of elastic virtual segments to run the query:
+
+``` pre
+postgres=# CREATE TABLE a (i int);
+CREATE TABLE
+postgres=# INSERT INTO a VALUES(1);
+INSERT 0 1
+postgres=# INSERT INTO a VALUES(2);
+INSERT 0 1
+postgres=# INSERT INTO a VALUES(10);
+INSERT 0 1
+postgres=# CREATE WRITABLE EXTERNAL WEB TABLE externala (output text) 
+postgres-# EXECUTE 'cat > /tmp/externala' ON 3 
+postgres-# FORMAT 'TEXT' DISTRIBUTED RANDOMLY;
+CREATE EXTERNAL TABLE
+postgres=# INSERT INTO externala SELECT * FROM a;
+INSERT 0 3
+```
+
+Create a writable external web table that pipes output data received by the segments to an executable script named `to_adreport_etl.sh`:
+
+``` pre
+CREATE WRITABLE EXTERNAL WEB TABLE campaign_out (LIKE campaign)  
+EXECUTE '/var/unload_scripts/to_adreport_etl.sh'
+FORMAT 'TEXT' (DELIMITER '|');
+```
+
+Use the writable external table defined above to unload selected data:
+
+``` pre
+INSERT INTO campaign_out 
+    SELECT * FROM campaign WHERE customer_id=123;
+```
+
+## <a id="topic1__section6"></a>Compatibility
+
+`CREATE EXTERNAL TABLE` is a HAWQ extension. The SQL standard makes no provisions for external tables.
+
+## <a id="topic1__section7"></a>See Also
+
+[CREATE TABLE](CREATE-TABLE.html), [CREATE TABLE AS](CREATE-TABLE-AS.html), [COPY](COPY.html), [INSERT](INSERT.html), [SELECT INTO](SELECT-INTO.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CREATE-FUNCTION.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-FUNCTION.html.md.erb b/reference/sql/CREATE-FUNCTION.html.md.erb
new file mode 100644
index 0000000..6675752
--- /dev/null
+++ b/reference/sql/CREATE-FUNCTION.html.md.erb
@@ -0,0 +1,190 @@
+---
+title: CREATE FUNCTION
+---
+
+Defines a new function.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [OR REPLACE] FUNCTION <name>
+    ( [ [<argmode>] [<argname>] <argtype> [, ...] ] )
+    [ RETURNS { [ SETOF ] <rettype>
+        | TABLE ([{ <argname> <argtype> | LIKE <other table> }
+            [, ...]])
+        } ]
+    { LANGUAGE <langname>
+    | IMMUTABLE | STABLE | VOLATILE
+    | CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT
+    | [EXTERNAL] SECURITY INVOKER | [EXTERNAL] SECURITY DEFINER
+    | AS '<definition>'
+    | AS '<obj_file>', '<link_symbol>' } ...
+    [ WITH ({ DESCRIBE = <describe_function>
+           } [, ...] ) ]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE FUNCTION` defines a new function. `CREATE OR REPLACE FUNCTION` will either create a new function or replace an existing definition.
+
+The name of the new function must not match any existing function with the same argument types in the same schema. However, functions of different argument types may share a name (overloading).
+
+To update the definition of an existing function, use `CREATE OR REPLACE FUNCTION`. It is not possible to change the name or argument types of a function this way (this would actually create a new, distinct function). Also, `CREATE OR REPLACE FUNCTION` will not let you change the return type of an existing function. To do that, you must drop and recreate the function. If you drop and then recreate a function, you will have to drop existing objects (rules, views, and so on) that refer to the old function. Use `CREATE OR REPLACE FUNCTION` to change a function definition without breaking objects that refer to the function.
+
+For more information about creating functions, see [User-Defined Functions](../../query/functions-operators.html#topic28).
+
+**Limited Use of VOLATILE and STABLE Functions**
+
+To prevent data from becoming out-of-sync across the segments in HAWQ, any function classified as `STABLE` or `VOLATILE` cannot be executed at the segment level if it contains SQL or modifies the database in any way. For example, functions such as `random()` or `timeofday()` are not allowed to execute on distributed data in HAWQ because they could potentially cause inconsistent data between the segment instances.
+
+To ensure data consistency, `VOLATILE` and `STABLE` functions can safely be used in statements that are evaluated on and execute from the master. For example, the following statements are always executed on the master (statements without a `FROM` clause):
+
+``` pre
+SELECT setval('myseq', 201);
+SELECT foo();
+```
+
+In cases where a statement has a `FROM` clause containing a distributed table and the function used in the `FROM` clause simply returns a set of rows, execution may be allowed on the segments:
+
+``` pre
+SELECT * FROM foo();
+```
+
+One exception to this rule is functions that return a table reference (`rangeFuncs`) or that use the `refCursor` data type. Note that you cannot return a `refcursor` from any kind of function in HAWQ.
+
+## <a id="topic1__section5"></a>Parameters
+
+<dt> \<name\>  </dt>
+<dd>The name (optionally schema-qualified) of the function to create.</dd>
+
+<dt> \<argmode\>  </dt>
+<dd>The mode of an argument: either `IN`, `OUT`, or `INOUT`. If omitted, the default is `IN`.</dd>
+
+<dt> \<argname\>  </dt>
+<dd>The name of an argument. Some languages (currently only PL/pgSQL) let you use the name in the function body. For other languages the name of an input argument is just extra documentation. But the name of an output argument is significant, since it defines the column name in the result row type. (If you omit the name for an output argument, the system will choose a default column name.)</dd>
+
+<dt> \<argtype\>  </dt>
+<dd>The data type(s) of the function's arguments (optionally schema-qualified), if any. The argument types may be base, composite, or domain types, or may reference the type of a table column.
+
+Depending on the implementation language, you may also be allowed to specify pseudotypes such as `cstring`. Pseudotypes indicate that the actual argument type is either incompletely specified, or outside the set of ordinary SQL data types.
+
+The type of a column is referenced by writing `<tablename>.<columnname>%TYPE`. Using this feature can sometimes help make a function independent of changes to the definition of a table.</dd>
+
+<dt> \<rettype\>  </dt>
+<dd>The return data type (optionally schema-qualified). The return type can be a base, composite, or domain type, or may reference the type of a table column. Depending on the implementation language, you may also be allowed to specify pseudotypes such as `cstring`. If the function is not supposed to return a value, specify `void` as the return type.
+
+When there are `OUT` or `INOUT` parameters, the `RETURNS` clause may be omitted. If present, it must agree with the result type implied by the output parameters: `RECORD` if there are multiple output parameters, or the same type as the single output parameter.
+
+The `SETOF` modifier indicates that the function will return a set of items, rather than a single item.
+
+The type of a column is referenced by writing `<tablename>.<columnname>%TYPE`.</dd>
+
+<dt> \<langname\>  </dt>
+<dd>The name of the language that the function is implemented in. May be `SQL`, `C`, `internal`, or the name of a user-defined procedural language. See [CREATE LANGUAGE](CREATE-LANGUAGE.html) for the procedural languages supported in HAWQ. For backward compatibility, the name may be enclosed by single quotes.</dd>
+
+<dt>IMMUTABLE  
+STABLE  
+VOLATILE  </dt>
+<dd>These attributes inform the query optimizer about the behavior of the function. At most one choice may be specified. If none of these appear, `VOLATILE` is the default assumption. Since HAWQ currently has limited use of `VOLATILE` functions, if a function is truly `IMMUTABLE`, you must declare it as such to be able to use it without restrictions.
+
+`IMMUTABLE` indicates that the function cannot modify the database and always returns the same result when given the same argument values. It does not do database lookups or otherwise use information not directly present in its argument list. If this option is given, any call of the function with all-constant arguments can be immediately replaced with the function value.
+
+`STABLE` indicates that the function cannot modify the database, and that within a single table scan it will consistently return the same result for the same argument values, but that its result could change across SQL statements. This is the appropriate selection for functions whose results depend on database lookups, parameter values (such as the current time zone), and so on. Also note that the *current\_timestamp* family of functions qualify as stable, since their values do not change within a transaction.
+
+`VOLATILE` indicates that the function value can change even within a single table scan, so no optimizations can be made. Relatively few database functions are volatile in this sense; examples include `random()`, `currval()`, and `timeofday()`. But note that any function that has side-effects must be classified volatile, even if its result is quite predictable, to prevent calls from being optimized away; an example is `setval()`.</dd>
+
+<dt>CALLED ON NULL INPUT  
+RETURNS NULL ON NULL INPUT  
+STRICT  </dt>
+<dd>`CALLED ON NULL INPUT` (the default) indicates that the function will be called normally when some of its arguments are null. It is then the function author's responsibility to check for null values if necessary and respond appropriately. `RETURNS NULL ON NULL INPUT` or `STRICT` indicates that the function always returns null whenever any of its arguments are null. If this parameter is specified, the function is not executed when there are null arguments; instead a null result is assumed automatically.</dd>
+
+<dt>\[EXTERNAL\] SECURITY INVOKER  
+\[EXTERNAL\] SECURITY DEFINER  </dt>
+<dd>`SECURITY INVOKER` (the default) indicates that the function is to be executed with the privileges of the user that calls it. `SECURITY DEFINER` specifies that the function is to be executed with the privileges of the user that created it. The key word `EXTERNAL` is allowed for SQL conformance, but it is optional since, unlike in SQL, this feature applies to all functions not just external ones.</dd>
+
+<dt> \<definition\>  </dt>
+<dd>A string constant defining the function; the meaning depends on the language. It may be an internal function name, the path to an object file, an SQL command, or text in a procedural language.</dd>
+
+<dt> \<obj\_file\>, \<link\_symbol\>  </dt>
+<dd>This form of the `AS` clause is used for dynamically loadable C language functions when the function name in the C language source code is not the same as the name of the SQL function. The string \<obj\_file\> is the name of the file containing the dynamically loadable object, and \<link\_symbol\> is the name of the function in the C language source code. If the link symbol is omitted, it is assumed to be the same as the name of the SQL function being defined. A good practice is to locate shared libraries either relative to `$libdir` (which is located at `$GPHOME/lib`) or through the dynamic library path (set by the `dynamic_library_path` server configuration parameter). This simplifies version upgrades if the new installation is at a different location.</dd>
+
+<dt> \<describe\_function\>  </dt>
+<dd>The name of a callback function to execute when a query that calls this function is parsed. The callback function returns a tuple descriptor that indicates the result type.</dd>
+
+## <a id="topic1__section6"></a>Notes
+
+Any compiled code (shared library files) for custom functions must be placed in the same location on every host in your HAWQ array (master and all segments). This location must also be in the `LD_LIBRARY_PATH` so that the server can locate the files. Consider locating shared libraries either relative to `$libdir` (which is located at `$GPHOME/lib`) or through the dynamic library path (set by the `dynamic_library_path` server configuration parameter) on all hosts in the HAWQ array.
+
+The full SQL type syntax is allowed for input arguments and return value. However, some details of the type specification (such as the precision field for type *numeric*) are the responsibility of the underlying function implementation and are not recognized or enforced by the `CREATE FUNCTION` command.
+
+HAWQ allows function overloading. The same name can be used for several different functions so long as they have distinct argument types. However, the C names of all functions must be different, so you must give overloaded C functions different C names (for example, use the argument types as part of the C names).
+
+Two functions are considered the same if they have the same names and input argument types, ignoring any `OUT` parameters. Thus for example these declarations conflict:
+
+``` pre
+CREATE FUNCTION foo(int) ...
+CREATE FUNCTION foo(int, out text) ...
+```
+
+When repeated `CREATE FUNCTION` calls refer to the same object file, the file is only loaded once. To unload and reload the file, use the `LOAD` command.
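+
+For example, after recompiling a shared library, the current session can pick up the new code (the library name here is hypothetical):
+
+``` pre
+LOAD '$libdir/myfuncs';
+```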
+
+To be able to define a function, the user must have the `USAGE` privilege on the language.
+
+It is often helpful to use dollar quoting to write the function definition string, rather than the normal single quote syntax. Without dollar quoting, any single quotes or backslashes in the function definition must be escaped by doubling them. A dollar-quoted string constant consists of a dollar sign (`$`), an optional tag of zero or more characters, another dollar sign, an arbitrary sequence of characters that makes up the string content, a dollar sign, the same tag that began this dollar quote, and a dollar sign. Inside the dollar-quoted string, single quotes, backslashes, or any character can be used without escaping. The string content is always written literally. For example, here are two different ways to specify the string "Dianne's horse" using dollar quoting:
+
+``` pre
+$$Dianne's horse$$
+$SomeTag$Dianne's horse$SomeTag$
+```
+
+## <a id="topic1__section8"></a>Examples
+
+A very simple addition function:
+
+``` pre
+CREATE FUNCTION add(integer, integer) RETURNS integer
+    AS 'select $1 + $2;'
+    LANGUAGE SQL
+    IMMUTABLE
+    RETURNS NULL ON NULL INPUT;
+```
+
+Increment an integer, making use of an argument name, in PL/pgSQL:
+
+``` pre
+CREATE OR REPLACE FUNCTION increment(i integer) RETURNS
+integer AS $$
+        BEGIN
+                RETURN i + 1;
+        END;
+$$ LANGUAGE plpgsql;
+```
+
+Return a record containing multiple output parameters:
+
+``` pre
+CREATE FUNCTION dup(in int, out f1 int, out f2 text)
+    AS $$ SELECT $1, CAST($1 AS text) || ' is text' $$
+    LANGUAGE SQL;
+SELECT * FROM dup(42);
+```
+
+You can do the same thing more verbosely with an explicitly named composite type:
+
+``` pre
+CREATE TYPE dup_result AS (f1 int, f2 text);
+CREATE FUNCTION dup(int) RETURNS dup_result
+    AS $$ SELECT $1, CAST($1 AS text) || ' is text' $$
+    LANGUAGE SQL;
+SELECT * FROM dup(42);
+```
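+
+A sketch of a `SECURITY DEFINER` function (the schema and table names are hypothetical), which executes with the privileges of its owner rather than of the caller:
+
+``` pre
+CREATE FUNCTION sales_total() RETURNS numeric AS $$
+    -- Runs with the owner's privileges, so callers need no
+    -- direct access to the underlying table.
+    SELECT sum(amount) FROM private.sales;
+$$ LANGUAGE SQL
+STABLE
+SECURITY DEFINER;
+```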
+
+## <a id="topic1__section9"></a>Compatibility
+
+`CREATE FUNCTION` is defined in SQL:1999 and later. The HAWQ version of the command is similar, but not fully compatible. The attributes are not portable, nor are the different available languages.
+
+For compatibility with some other database systems, \<argmode\> can be written either before or after \<argname\>. But only the first way is standard-compliant.
+
+## <a id="topic1__section10"></a>See Also
+
+[ALTER FUNCTION](ALTER-FUNCTION.html), [DROP FUNCTION](DROP-FUNCTION.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CREATE-GROUP.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-GROUP.html.md.erb b/reference/sql/CREATE-GROUP.html.md.erb
new file mode 100644
index 0000000..79cc6aa
--- /dev/null
+++ b/reference/sql/CREATE-GROUP.html.md.erb
@@ -0,0 +1,43 @@
+---
+title: CREATE GROUP
+---
+
+Defines a new database role.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE GROUP <name> [ [WITH] <option> [ ... ] ]
+```
+
+where \<option\> can be:
+
+``` pre
+      SUPERUSER | NOSUPERUSER
+    | CREATEDB | NOCREATEDB
+    | CREATEROLE | NOCREATEROLE
+    | CREATEUSER | NOCREATEUSER
+    | INHERIT | NOINHERIT
+    | LOGIN | NOLOGIN
+    | [ ENCRYPTED | UNENCRYPTED ] PASSWORD '<password>'
+    | VALID UNTIL '<timestamp>' 
+    | IN ROLE <rolename> [, ...]
+    | IN GROUP <rolename> [, ...]
+    | ROLE <rolename> [, ...]
+    | ADMIN <rolename> [, ...]
+    | USER <rolename> [, ...]
+    | SYSID <uid>
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE GROUP` has been replaced by [CREATE ROLE](CREATE-ROLE.html), although it is still accepted for backwards compatibility.
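+
+For example, a sketch of the legacy form next to its preferred `CREATE ROLE` equivalent (the role name is hypothetical):
+
+``` pre
+CREATE GROUP admins CREATEDB;
+-- Preferred, equivalent form:
+CREATE ROLE admins CREATEDB;
+```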
+
+## <a id="topic1__section4"></a>Compatibility
+
+There is no `CREATE GROUP` statement in the SQL standard.
+
+## <a id="topic1__section5"></a>See Also
+
+[CREATE ROLE](CREATE-ROLE.html)

http://git-wip-us.apache.org/repos/asf/incubator-hawq-docs/blob/7514e193/reference/sql/CREATE-LANGUAGE.html.md.erb
----------------------------------------------------------------------
diff --git a/reference/sql/CREATE-LANGUAGE.html.md.erb b/reference/sql/CREATE-LANGUAGE.html.md.erb
new file mode 100644
index 0000000..5a402de
--- /dev/null
+++ b/reference/sql/CREATE-LANGUAGE.html.md.erb
@@ -0,0 +1,93 @@
+---
+title: CREATE LANGUAGE
+---
+
+Defines a new procedural language.
+
+## <a id="topic1__section2"></a>Synopsis
+
+``` pre
+CREATE [PROCEDURAL] LANGUAGE <name>
+
+CREATE [TRUSTED] [PROCEDURAL] LANGUAGE <name>
+       HANDLER <call_handler> [VALIDATOR <valfunction>]
+```
+
+## <a id="topic1__section3"></a>Description
+
+`CREATE LANGUAGE` registers a new procedural language with a HAWQ database. Subsequently, functions can be defined in this new language. You must be a superuser to register a new language.
+
+When you register a new procedural language, you effectively associate the language name with a call handler that is responsible for executing functions written in that language. For a function written in a procedural language (a language other than C or SQL), the database server has no built-in knowledge about how to interpret the function's source code. The task is passed to a special handler that knows the details of the language. The handler could either do all the work of parsing, syntax analysis, execution, and so on, or it could serve as a bridge between HAWQ and an existing implementation of a programming language. The handler itself is a C language function compiled into a shared object and loaded on demand, just like any other C function.
+
+There are two forms of the `CREATE LANGUAGE` command. In the first form, the user specifies the name of the desired language and the HAWQ server uses the `pg_pltemplate` system catalog to determine the correct parameters. In the second form, the user specifies the language parameters as well as the language name. You can use the second form to create a language that is not defined in `pg_pltemplate`.
+
+When the server finds an entry in the `pg_pltemplate` catalog for the given language name, it will use the catalog data even if the command includes language parameters. This behavior simplifies loading of old dump files, which are likely to contain out-of-date information about language support functions.
+
+## <a id="topic1__section4"></a>Parameters
+
+<dt>TRUSTED  </dt>
+<dd>Ignored if the server has an entry for the specified language name in `pg_pltemplate`. Specifies that the call handler for the language is safe and does not offer an unprivileged user any functionality to bypass access restrictions. If this key word is omitted when registering the language, only users with the superuser privilege can use this language to create new functions.</dd>
+
+<dt>PROCEDURAL  </dt>
+<dd>Indicates that this is a procedural language.</dd>
+
+<dt> \<name\>   </dt>
+<dd>The name of the new procedural language. The language name is case insensitive. The name must be unique among the languages in the database. Built-in support is included for `plpgsql`, `plpython`, `plpythonu`, and `plr`. `plpgsql` is installed by default in HAWQ.</dd>
+
+<dt>HANDLER \<call\_handler\>   </dt>
+<dd>Ignored if the server has an entry for the specified language name in `pg_pltemplate`. The name of a previously registered function that will be called to execute the procedural language functions. The call handler for a procedural language must be written in a compiled language such as C with version 1 call convention and registered with HAWQ as a function taking no arguments and returning the `language_handler` type, a placeholder type that is simply used to identify the function as a call handler.</dd>
+
+<dt>VALIDATOR \<valfunction\>   </dt>
+<dd>Ignored if the server has an entry for the specified language name in `pg_pltemplate`. \<valfunction\> is the name of a previously registered function that will be called when a new function in the language is created, to validate the new function. If no validator function is specified, then a new function will not be checked when it is created. The validator function must take one argument of type `oid`, which will be the OID of the to-be-created function; it will typically return `void`.
+
+A validator function would typically inspect the function body for syntactical correctness, but it can also look at other properties of the function, for example if the language cannot handle certain argument types. To signal an error, the validator function should use the `ereport()` function. The return value of the function is ignored.</dd>
+
+## <a id="topic1__section5"></a>Notes
+
+The procedural language packages included in the standard HAWQ distribution are:
+
+-   `PL/pgSQL` - registered in all databases by default
+-   `PL/Perl`
+-   `PL/Python`
+-   `PL/Java`
+
+HAWQ supports a language handler for `PL/R`, but the `PL/R` language package is not pre-installed with HAWQ.
+
+The system catalog `pg_language` records information about the currently installed languages.
+
+To create functions in a procedural language, a user must have the `USAGE` privilege for the language. By default, `USAGE` is granted to `PUBLIC` (everyone) for trusted languages. This may be revoked if desired.
+
+Procedural languages are local to individual databases. However, a language can be installed into the `template1` database, which will cause it to be available automatically in all subsequently-created databases.
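+
+For example (a sketch; requires superuser privileges), registering a language in `template1` makes it available in databases created afterward:
+
+``` pre
+-- Run while connected to template1, for example via: psql template1
+CREATE LANGUAGE plpythonu;
+```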
+
+The call handler function and the validator function (if any) must already exist if the server does not have an entry for the language in `pg_pltemplate`. But when there is an entry, the functions need not already exist; they will be automatically defined if not present in the database.
+
+Any shared library that implements a language must be located in the same `LD_LIBRARY_PATH` location on all segment hosts in your HAWQ array.
+
+## <a id="topic1__section6"></a>Examples
+
+The preferred way of creating any of the standard procedural languages in a database:
+
+``` pre
+CREATE LANGUAGE plr;
+CREATE LANGUAGE plpythonu;
+CREATE LANGUAGE plperl;
+```
+
+For a language not known in the `pg_pltemplate` catalog:
+
+``` pre
+CREATE FUNCTION plsample_call_handler() RETURNS language_handler
+    AS '$libdir/plsample'
+    LANGUAGE C;
+CREATE LANGUAGE plsample
+    HANDLER plsample_call_handler;
+```
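+
+Alternatively, a validator function can be registered together with the handler (a sketch with hypothetical names, following the synopsis above):
+
+``` pre
+CREATE FUNCTION plsample_validator(oid) RETURNS void
+    AS '$libdir/plsample'
+    LANGUAGE C;
+CREATE LANGUAGE plsample
+    HANDLER plsample_call_handler
+    VALIDATOR plsample_validator;
+```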
+
+## <a id="topic1__section7"></a>Compatibility
+
+`CREATE LANGUAGE` is a HAWQ extension.
+
+## <a id="topic1__section8"></a>See Also
+
+[CREATE FUNCTION](CREATE-FUNCTION.html)